CN111683840A - Interactive method and system of movable platform, movable platform and storage medium - Google Patents


Info

Publication number
CN111683840A
Authority
CN
China
Prior art keywords
movable platform
image
display
dimensional
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980008059.6A
Other languages
Chinese (zh)
Other versions
CN111683840B (en)
Inventor
徐彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN111683840A publication Critical patent/CN111683840A/en
Application granted granted Critical
Publication of CN111683840B publication Critical patent/CN111683840B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F 3/147: Digital output to display device; Cooperation and interconnection of the display device with other functional units using display panels
    • B60K 35/00: Arrangement of adaptations of instruments
    • G06T 19/006: Mixed reality
    • B60R 1/28: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle with an adjustable field of view
    • G06T 15/005: General purpose rendering architectures
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • B60R 2300/301: Viewing arrangements using cameras and displays in a vehicle, combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B60R 2300/303: Viewing arrangements using cameras and displays in a vehicle, using joined images, e.g. multiple camera images
    • B60R 2300/802: Viewing arrangements using cameras and displays in a vehicle, for monitoring and displaying vehicle exterior blind spot views
    • B60R 2300/806: Viewing arrangements using cameras and displays in a vehicle, for aiding parking
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30261: Obstacle
    • G06T 2210/56: Particle system, point based geometry or rendering
    • G09G 2354/00: Aspects of interface with display user
    • G09G 2380/10: Automotive applications

Abstract

An interaction method and system for a movable platform, a movable platform, and a storage medium are provided. The method includes the following steps: projecting three-dimensional point cloud data acquired by a sensor into image data acquired by a camera at the same moment and fusing them to obtain a fused image (S301); rendering the fused image to determine a three-dimensional visualization of the environment surrounding the movable platform (S302); and outputting the three-dimensional visualization of the surrounding environment on a display interface (S303). The method enables a three-dimensional visual display of the movable platform's surroundings and improves the user experience.

Description

Interactive method and system of movable platform, movable platform and storage medium
Technical Field
The present invention relates to the field of control technologies, and in particular, to an interaction method and system for a movable platform, a movable platform, and a storage medium.
Background
Currently, the display and interaction modes of the electronic sensor systems on movable platforms such as automobiles remain fairly rudimentary. A parking radar (ultrasonic) system, for example, usually prompts the driver with a warning sound whose volume increases as the vehicle gets closer to an obstacle. A central control system typically offers navigation, entertainment, and reversing-camera functions, from which the driver obtains relatively little information. A dashboard display system usually reports the operating status of certain vehicle components, such as reminding the driver that a door is open or closed. Although modern electronic dashboards can present richer information, they essentially just move the original central-control content into the instrument cluster, for example by providing navigation in the dashboard, without offering additional display or interaction functions.
As driver-assistance and autonomous-driving technologies develop, a vehicle's ability to perceive its surroundings keeps improving, and the traditional display and interaction modes of sensor systems struggle to present the additional information that modern sensor systems can acquire. Providing a better display and interaction mode that cooperates with the sensor system to improve driving safety and the user experience is therefore of great significance.
Disclosure of Invention
Embodiments of the present invention provide an interaction method and system for a movable platform, a movable platform, and a storage medium, which can display a three-dimensional visualization of the environment surrounding the movable platform on a display interface, thereby improving the user experience.
In a first aspect, an embodiment of the present invention provides an interaction method for a movable platform, applied to an interaction system on which a display interface is arranged. The method includes:
projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image;
rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located;
outputting a three-dimensional visualization of the environment surrounding the movable platform on the display interface.
In a second aspect, an embodiment of the present invention provides an interactive system, which is applied to a movable platform, where a display interface is arranged on the interactive system, and the system includes: one or more processors, working collectively or individually, the processors being configured to:
projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image;
rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located;
outputting a three-dimensional visualization of the environment surrounding the movable platform on the display interface.
In a third aspect, an embodiment of the present invention provides a movable platform, including: a body; a power system arranged on the body and configured to provide the movable platform with power to move; and
a processor configured to project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image; render the fused image and determine a three-dimensional visualization of the environment surrounding the movable platform; and output the three-dimensional visualization of the surrounding environment on a display interface.
In a fourth aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method according to the first aspect.
In the embodiment of the invention, the interactive system can project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fusion image, render the fusion image, determine the three-dimensional visual image of the surrounding environment where the movable platform is located, and output the three-dimensional visual image of the surrounding environment where the movable platform is located on the display interface. By this embodiment, the user experience can be improved.
Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a display interface of an interactive system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an interactive system according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of an interaction method for a movable platform according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another interaction method for a movable platform according to an embodiment of the present invention;
FIG. 5 is a display interface of a three-dimensional visual image when the movable platform is in a reverse state according to an embodiment of the present invention;
FIG. 6 is a display interface of a three-dimensional visual image when the movable platform is in a lane-changing state according to an embodiment of the present invention;
FIG. 7 is a display interface of a three-dimensional visual image when the movable platform is in an acceleration state according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an interactive system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The interaction method of the movable platform provided in the embodiment of the present invention may be executed by an interaction system, and the interaction system may include a movable platform and an interaction device. In some embodiments, the interaction device may be disposed on the movable platform; in some embodiments, the interaction device may be independent of the movable platform. For example, the interaction device may be disposed on a mobile phone, a tablet computer, a smart watch, or other terminals that establish a communication connection with the mobile platform. In one embodiment, the interaction device may also be provided in a cloud processor. In other embodiments, the interaction device may be applied to other devices such as unmanned vehicles, unmanned planes, robots, unmanned ships, and the like.
According to the embodiments of the present invention, a complete yet simple three-dimensional display interface for observing the environment surrounding the movable platform is constructed for different motion scenarios. This allows the driver to grasp the surrounding environment quickly, eliminates driving blind spots, removes the time spent switching between and checking the different sensors of a conventional automobile, and thus improves both driving safety and the driving experience.
In one embodiment, the interaction device may project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image, render the fused image, determine a three-dimensional visualization of the environment around the movable platform, and output that visualization on the display interface. Specifically, the interaction device may project the three-dimensional point cloud data acquired by the sensor at a given moment into the image data acquired by the camera at the same moment for fusion processing. A user can therefore obtain, on a single display interface, a three-dimensional visualization that combines the three-dimensional point cloud information and/or image information of the environment surrounding the movable platform at that moment.
In one embodiment, the interaction device may be arranged at any position in the movable platform that is convenient for the user to operate. Placing the interaction device within easy reach makes it convenient for a driver or passenger to view the content shown on the display interface and to control the display angle of the three-dimensional visualization through the interaction device.
In one embodiment, one or more sensors may be disposed on the movable platform to acquire point cloud data of the environment surrounding it; in some embodiments, the movable platform also carries a camera or similar capture device for capturing image data of the surrounding environment.
Specifically, FIG. 1 is a schematic diagram of a display interface of an interactive system according to an embodiment of the present invention. As shown in FIG. 1, the display interface of the interactive system includes a display area 11 and a touch area 12. In some embodiments, the display area 11 is used to display a three-dimensional visualization of the environment surrounding the movable platform. In some embodiments, the touch area 12 includes angle display icons and sensor type icons; the angle display icons include a reverse display 121, a left lane change display 122, a right lane change display 123, and an acceleration display 124, which indicate the viewing angle of the three-dimensional visualization; the sensor type icons include an image sensor 125, a lidar 126, a millimeter wave radar 127, and an ultrasonic radar 128, from which the user selects the type of sensor used to acquire the point cloud data.
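For illustration only, the interface layout described above can be modelled as a small data structure. The class and icon names below simply mirror the reference numerals in FIG. 1 and are an assumption, not part of the claimed implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto

class AngleIcon(Enum):            # angle display icons (cf. 121-124 in FIG. 1)
    REVERSE = auto()
    LEFT_LANE_CHANGE = auto()
    RIGHT_LANE_CHANGE = auto()
    ACCELERATION = auto()

class SensorIcon(Enum):           # sensor type icons (cf. 125-128 in FIG. 1)
    IMAGE_SENSOR = auto()
    LIDAR = auto()
    MILLIMETER_WAVE_RADAR = auto()
    ULTRASONIC_RADAR = auto()

@dataclass
class DisplayInterface:
    """Display area 11 shows the 3-D visualization; touch area 12 holds the icons."""
    display_area: object = None               # rendered 3-D view is drawn here
    angle_icons: tuple = tuple(AngleIcon)     # icons the user can touch to pick a viewing angle
    sensor_icons: tuple = tuple(SensorIcon)   # icons the user can touch to pick a sensor type
```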
According to the embodiment of the invention, the visual angle of the three-dimensional visual image displayed on the display interface can be determined according to the motion state of the movable platform, so that the safety of the movable platform in the moving process is improved.
The interactive system proposed by the embodiment of the present invention is schematically illustrated with reference to fig. 2.
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of an interactive system according to an embodiment of the present invention. The interactive system shown in FIG. 2 includes a movable platform 21 and an interaction device 22 disposed on the movable platform 21. The movable platform 21 may include a power system 211, and the power system 211 provides the power for the movable platform 21 to operate.
In the embodiment of the present invention, the interaction device 22 is disposed in the movable platform 21, a display interface is arranged on the interaction device 22, and the interaction device 22 is connected to the movable platform 21. The display interface of the interaction device 22 includes a display area for displaying a three-dimensional visualization of the environment surrounding the movable platform 21, and a touch area that contains angle display icons and sensor type icons. The interaction device 22 may obtain the three-dimensional point cloud data collected by the sensor that the user selects through the sensor type icons in the touch area, and may obtain the image data collected by the camera. The interaction device 22 projects the three-dimensional point cloud data into the image data for fusion processing to obtain a fused image, renders the fused image to obtain a three-dimensional visualization of the environment surrounding the movable platform, and outputs that visualization in the display area of the display interface.
The following describes schematically an interaction method of a movable platform provided in an embodiment of the present invention with reference to the accompanying drawings.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of an interaction method for a movable platform according to an embodiment of the present invention. The method may be executed by an interaction system; a display interface is arranged on the interaction system, and the interaction system is communicatively connected to the movable platform. Specifically, the method of the embodiment of the present invention includes the following steps.
S301: and projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image.
In the embodiment of the invention, the interactive system can project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fusion image. In certain embodiments, the sensors include, but are not limited to, any one or more of image sensors, lidar, millimeter wave radar, and ultrasonic radar.
In some embodiments, before projecting the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing, the interactive system may acquire the user's touch operation on a sensor type icon in the touch area, determine the target sensor corresponding to the sensor type icon selected by the touch operation, and acquire the three-dimensional point cloud data collected by that target sensor. In some embodiments, the touch operation includes, but is not limited to, a click operation, a slide operation, or a drag operation on the sensor type icon.
Taking fig. 1 as an example, assuming that the interactive system obtains the touch operation of the user on the icon of the laser radar 126 in the touch area 12, it may be determined that the laser radar 126 on the movable platform is a target sensor, and obtain three-dimensional point cloud data corresponding to the environment around the movable platform, which is acquired by the laser radar 126.
In this way, the user can independently select the sensor used to collect point cloud data of the environment surrounding the movable platform, which improves both the flexibility of point cloud acquisition and the user experience.
In one embodiment, before the interactive system projects the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing, if the touch operation of the user on the sensor type icon in the touch area is not acquired, the three-dimensional point cloud data can be acquired according to a preset type of sensor.
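A possible selection flow consistent with the two paragraphs above is sketched below: use the sensor chosen through the touch area when one was touched, otherwise fall back to a preset sensor type. The function name, sensor strings, and the choice of lidar as the default are illustrative assumptions; the embodiment does not fix which preset type is used.

```python
DEFAULT_SENSOR = "lidar"   # assumed preset sensor type; the disclosure leaves this open

def select_target_sensor(touched_sensor_icon=None):
    """Return the sensor whose point cloud will be used for fusion (illustrative)."""
    available = {"image_sensor", "lidar", "millimeter_wave_radar", "ultrasonic_radar"}
    if touched_sensor_icon in available:
        return touched_sensor_icon      # user picked a sensor type icon in the touch area
    return DEFAULT_SENSOR               # no touch operation acquired: use the preset type

# Example: no icon was touched, so the preset sensor is used.
print(select_target_sensor())           # -> "lidar"
```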
In one embodiment, when the interactive system projects the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image, it may determine the coordinate transformation relationship between the acquired image data and the three-dimensional point cloud data, convert the image data and the three-dimensional point cloud data into the same coordinate system based on that relationship, and project the three-dimensional point cloud data converted into the same coordinate system into the image data for fusion processing to obtain the fused image.
In one embodiment, when converting the image data and the three-dimensional point cloud data into the same coordinate system and projecting the converted point cloud data into the image data for fusion processing, the interactive system may either apply a coordinate transformation to the image data and fuse the transformed image data with the three-dimensional point cloud data, or apply a coordinate transformation to the three-dimensional point cloud data and fuse the transformed point cloud data with the image data.
In this way, the flexibility of the fusion processing of the three-dimensional point cloud data and the image data can be improved.
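As a concrete, non-normative illustration of the projection and fusion step, the sketch below assumes a known lidar-to-camera extrinsic matrix `T_cam_lidar` (the coordinate transformation relationship) and a pinhole intrinsic matrix `K`; the disclosure does not prescribe how these calibrations are obtained or how the fused image is stored. Here the projected depths are written into an extra channel of the image to stand in for the fused image:

```python
import numpy as np

def fuse_point_cloud_with_image(points_lidar, image, T_cam_lidar, K):
    """Project 3-D lidar points into the camera image and attach depth (illustrative sketch)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]               # keep only points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # pinhole projection to pixel coordinates
    h, w = image.shape[:2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)       # discard points outside the image
    fused = np.dstack([image.astype(np.float32),
                       np.zeros((h, w), np.float32)])  # colour channels + one depth channel
    fused[v[ok], u[ok], -1] = pts_cam[ok, 2]           # write point-cloud depth into the fused image
    return fused
```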
S302: rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located.
In the embodiment of the invention, the interactive system can render the fused image and determine the three-dimensional visual image of the surrounding environment where the movable platform is located.
In one embodiment, when rendering the fused image and determining the three-dimensional visualization image of the environment around the movable platform, the interactive system may project the fused image onto a two-dimensional plane to obtain at least one projection image, determine obstacle information of the environment around the movable platform according to the at least one projection image, and determine the three-dimensional visualization image according to the obstacle information.
In some embodiments, the interactive system may determine a three-dimensional visualization based on the location of the obstacle and the distance of the obstacle from the movable platform when determining the three-dimensional visualization based on the obstacle information.
In some embodiments, the obstacle information includes any one or more of position information, size information, and distance information of the obstacle. In certain embodiments, the obstacle comprises any one or more of a pedestrian, a vehicle, an animal, a plant.
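One possible way to turn the fused data into obstacle information, consistent with the projection-to-a-plane step above, is sketched here: back-project the depth channel of the fused image, drop the points onto the ground plane, and treat occupied grid cells as coarse obstacles with positions and distances from the platform. The grid size, range, and detection logic are assumptions; the embodiment does not specify the obstacle-detection algorithm.

```python
import numpy as np

def obstacle_info_from_fused(fused, K, cell_size=0.5, max_range=30.0):
    """Derive coarse obstacle positions/distances from the depth channel of a fused image."""
    depth = fused[:, :, -1]
    v, u = np.nonzero(depth > 0)              # pixels carrying projected point-cloud depth
    z = depth[v, u]
    fx, cx = K[0, 0], K[0, 2]
    x = (u - cx) * z / fx                     # lateral offset in the camera frame
    keep = z < max_range
    x, z = x[keep], z[keep]
    # quantise (x, z) onto a ground-plane grid; each occupied cell is one coarse obstacle
    cells = {(int(xc // cell_size), int(zc // cell_size)) for xc, zc in zip(x, z)}
    return [{"position": (i * cell_size, j * cell_size),
             "distance": float(np.hypot(i * cell_size, j * cell_size))}
            for i, j in cells]
```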
S303: outputting a three-dimensional visualization of the environment around which the movable platform is located on a display interface.
In the embodiment of the invention, the interactive system can output the three-dimensional visual image of the surrounding environment where the movable platform is located on the display interface. In some embodiments, the display interface includes a display area and a touch area, the touch area including an angle display icon and a sensor type icon. In some embodiments, the display interface may be a touch display interface.
In one embodiment, when outputting the three-dimensional visualization of the environment surrounding the movable platform on the display interface, the interactive system may output it in a display area of the display interface. Taking FIG. 1 as an example, the interactive system may output the three-dimensional visualization of the environment surrounding the movable platform in the display area 11 of the display interface.
In an embodiment, when outputting the three-dimensional visualization of the environment surrounding the movable platform in the display area of the display interface, the interactive system may acquire the user's touch operation on an angle display icon in the touch area, generate the angle display instruction corresponding to that touch operation, and display the three-dimensional visualization in the display area according to the display viewing angle indicated by the angle display instruction. In some embodiments, the touch operation includes, but is not limited to, a click operation, a slide operation, or a drag operation on the angle display icon.
Taking FIG. 1 as an example, if the interactive system obtains the user's touch operation on the reverse display 121 icon in the touch area 12, it may generate the angle display instruction corresponding to the reverse display 121 icon according to that touch operation, and display the three-dimensional visualization in the display area of the display interface according to the display viewing angle indicated by the instruction.
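The mapping from a touched angle display icon to an angle display instruction could, for example, be a simple lookup table as sketched below. The icon names, view descriptions, range values, and the `renderer.set_view` call are all hypothetical placeholders, not part of the disclosed implementation:

```python
# Hypothetical mapping from an angle display icon to the viewing angle it indicates.
ANGLE_INSTRUCTIONS = {
    "reverse_display":           {"look_at": "rear",       "range_m": 5.0},
    "left_lane_change_display":  {"look_at": "left_lane",  "range_m": 20.0},
    "right_lane_change_display": {"look_at": "right_lane", "range_m": 20.0},
    "acceleration_display":      {"look_at": "front",      "range_m": 50.0},
}

def on_icon_touched(icon_name, renderer):
    """Generate the angle display instruction and apply it to the 3-D view (illustrative)."""
    instruction = ANGLE_INSTRUCTIONS.get(icon_name)
    if instruction is not None:
        renderer.set_view(**instruction)   # assumed renderer API: repositions the virtual camera
```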
In one embodiment, when the interactive system outputs a three-dimensional visual image of the surrounding environment where the movable platform is located on the display interface, the interactive system may acquire a current motion state of the movable platform, and determine a display view angle of the three-dimensional visual image according to the current motion state, so that the three-dimensional visual image is displayed in a display area of the display interface according to the display view angle.
In the embodiment of the invention, the interactive system can project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fusion image, render the fusion image, determine the three-dimensional visual image of the surrounding environment where the movable platform is located, and output the three-dimensional visual image of the surrounding environment where the movable platform is located on the display interface. By the implementation, the three-dimensional visual image of the surrounding environment where the movable platform is located can be output, and the use experience of a user is improved.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of another interaction method for a movable platform according to an embodiment of the present invention. The method may be executed by an interaction system as described above. The method of this embodiment differs from the method shown in FIG. 3 in that it illustrates how the display viewing angle of the three-dimensional visualization is determined according to the motion state of the movable platform. Specifically, the method includes the following steps.
S401: and projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image.
In the embodiment of the invention, the interactive system can project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fusion image.
S402: rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located.
In the embodiment of the invention, the interactive system can render the fused image and determine the three-dimensional visual image of the surrounding environment where the movable platform is located.
S403: and acquiring the current motion state of the movable platform.
In the embodiment of the invention, the interactive system can acquire the current motion state of the movable platform. In some embodiments, the current motion state of the movable platform includes, but is not limited to, any one or more of a reverse state, a lane change state, a braking state, an acceleration state, and a deceleration state.
S404: and determining the display visual angle of the three-dimensional visual image according to the current motion state.
In the embodiment of the present invention, the interactive system may determine the display viewing angle of the three-dimensional visual image according to the current motion state.
In one embodiment, the current motion state includes a reverse state. When determining the display viewing angle of the three-dimensional visual image according to the current motion state, the interactive system may determine the current motion state of the movable platform, and, when the movable platform is determined to be in the reverse state, set the display viewing angle so that the three-dimensional visual image shows the area within a first preset region of the movable platform.
In one embodiment, while the movable platform is in the reverse state, if an obstacle is detected within the first preset area of the displayed three-dimensional visual image, the interactive system may display the obstacle in the image together with the distance between the obstacle and the movable platform.
In an embodiment, the interactive system may determine that the movable platform is in the reverse state based on acquiring the user's touch operation on the reverse display icon among the angle display icons in the touch area.
In other embodiments, the interactive system may determine whether the movable platform is in the reverse state from information such as the acquired reverse gear state and the rotation direction of the tires; how to determine whether the movable platform is in the reverse state is not specifically limited here.
Specifically, FIG. 5 shows an example of the display viewing angle when the movable platform is in the reverse state, i.e., the display interface of the three-dimensional visual image in the reverse state according to an embodiment of the present invention. As shown in FIG. 5, the movable platform is a vehicle 51, the display viewing angle of the three-dimensional visual image shows a first preset area 52 behind the tail of the vehicle 51, an obstacle 521 is detected in the first preset area 52, and the distance from the obstacle 521 to the tail of the vehicle 51 is 1 m.
It can be seen that, in the reverse state, the environment within the first preset area behind the tail of the vehicle is displayed, together with any obstacle in that area and its distance from the tail of the vehicle. This reminds the user, while reversing, of where the obstacle is and how far it is from the tail of the vehicle, helps avoid colliding with the obstacle, and improves the safety of the movable platform in the reverse state.
In one embodiment, the current motion state includes a lane change state. When determining the display viewing angle of the three-dimensional visual image according to the current motion state, if the movable platform is determined to be in the lane change state, the interactive system may set the display viewing angle so that the three-dimensional visual image shows the target lane of the lane change within a second preset area.
In an embodiment, the interactive system may determine that the movable platform is in the lane change state based on acquiring the user's touch operation on a lane change display icon among the angle display icons in the touch area.
In one embodiment, the lane change display icons include a left lane change display icon and a right lane change display icon. If the interactive system obtains the user's touch operation on the left lane change display icon in the touch area, it may determine that the movable platform is changing lanes to the left; if it obtains the user's touch operation on the right lane change display icon, it may determine that the movable platform is changing lanes to the right.
In other embodiments, the interactive system may determine whether the movable platform is in the lane change state from information such as the acquired turn signal state (e.g., whether the left or right turn signal is on), the steering angle of the tires, and the rotation direction of the tires; how to determine whether the movable platform is in the lane change state is not specifically limited here.
Specifically, FIG. 6 shows an example of the display viewing angle when the movable platform is in the lane change state, i.e., the display interface of the three-dimensional visual image in the lane change state according to an embodiment of the present invention. As shown in FIG. 6, the movable platform is a vehicle 61 travelling in a current lane 62, and the target lane of a left lane change is a lane 63 that contains a vehicle 64. The display viewing angle of the three-dimensional visual image shows the left target lane 63 within a second preset area 65; the vehicle 64 is detected as an obstacle in the second preset area 65 of the lane 63, and its distance from the vehicle 61 is determined to be 10 m.
It can be seen that, when the movable platform is in the lane change state, the three-dimensional visual image of the target lane within the second preset area is displayed, together with any obstacle in that area and its distance from the vehicle. This reminds the user, during the lane change, of where the obstacle is and how far it is from the movable platform, makes it easy to judge whether that distance is within a safe range, helps avoid colliding with the obstacle during the lane change, and improves the safety of the movable platform in the lane change state.
In an embodiment, the current motion state includes an acceleration state. When determining the display viewing angle of the three-dimensional visual image according to the current motion state, if the movable platform is determined to be in the acceleration state, the interactive system may set the display viewing angle so that the three-dimensional visual image shows a third preset area ahead of the movable platform.
In one embodiment, the interactive system may determine that the movable platform is in the acceleration state based on acquiring the user's touch operation on the acceleration display icon among the angle display icons in the touch area.
In other embodiments, the interactive system may determine whether the movable platform is in an acceleration state or a deceleration state according to the acquired speed information of the movable platform.
Specifically, FIG. 7 shows an example of the display viewing angle when the movable platform is in the acceleration state, i.e., the display interface of the three-dimensional visual image in the acceleration state according to an embodiment of the present invention. As shown in FIG. 7, the movable platform is a vehicle 70, and the display viewing angle of the three-dimensional visual image shows a third preset area 77 ahead of the head of the vehicle 70 (i.e., in the acceleration direction). Within the third preset area 77, a vehicle 72 is detected in the current lane 71 at a distance of 15 m from the vehicle 70, a vehicle 74 is detected in the left adjacent lane 73 at a distance of 5 m, and a vehicle 76 is detected in the right adjacent lane 75 at a distance of 10 m.
It can be seen that, when the movable platform is in the acceleration state, the environment within the third preset area ahead of the vehicle head is displayed, together with any obstacle in that area and its distance from the vehicle. This reminds the user, while accelerating, of where the obstacle is and how far away it is, helps avoid colliding with the obstacle, and improves the safety of the movable platform in the acceleration state.
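Putting the three cases together, the sketch below shows one way the current motion state could select the display viewing angle and restrict the displayed obstacles to the corresponding preset region. The region sizes are placeholder values and the state names simply follow the embodiments above; the disclosure leaves the actual sizes of the first, second, and third preset areas open.

```python
# Illustrative sizes for the first/second/third preset areas; the patent leaves these open.
PRESET_REGIONS = {
    "reverse":      {"look_at": "rear",        "region_m": 5.0},   # first preset area
    "lane_change":  {"look_at": "target_lane", "region_m": 20.0},  # second preset area
    "acceleration": {"look_at": "front",       "region_m": 50.0},  # third preset area
}

def display_view_for_state(motion_state, obstacles):
    """Pick the viewing angle from the motion state and keep obstacles inside the shown region."""
    view = PRESET_REGIONS.get(motion_state)
    if view is None:                       # e.g. braking or deceleration: keep the default view
        return None, obstacles
    visible = [o for o in obstacles if o["distance"] <= view["region_m"]]
    return view, visible                   # each visible obstacle's distance is then displayed
```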
S405: and displaying the three-dimensional visual image in a display area of a display interface according to the display visual angle.
In the embodiment of the present invention, the interactive system may display the three-dimensional visual image in the display area of the display interface according to the display viewing angle.
In the embodiment of the invention, the interactive system can acquire the current motion state of the movable platform, determine the display visual angle of the three-dimensional visual image of the environment around the movable platform according to the current motion state, and display the three-dimensional visual image in the display area of the display interface according to the display visual angle. By the implementation mode, the display visual angle of the three-dimensional visual image can be automatically determined according to the motion state of the movable platform, and the display efficiency and flexibility of the three-dimensional visual image are improved.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of an interactive system according to an embodiment of the present invention. The interactive system is applied to a movable platform and provided with a display interface, and includes a memory 801, one or more processors 802, and a data interface 803.
the memory 801 may include a volatile memory (volatile memory); the memory 801 may also include a non-volatile memory (non-volatile memory); the memory 801 may also comprise a combination of memories of the kind described above. The processor 802 may be a Central Processing Unit (CPU). The processor 802 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), or any combination thereof.
The processor 802 is configured to invoke program instructions stored in the memory 801 and, when the program instructions are executed, to:
projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image;
rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located;
outputting a three-dimensional visualization of the environment surrounding the movable platform on the display interface.
Further, the display interface includes a display area; when the processor 802 outputs the three-dimensional visualization image of the environment around the movable platform on the display interface, it is specifically configured to:
outputting, on a display area on the display interface, a three-dimensional visualization of an environment surrounding the movable platform.
Further, the display interface comprises a display area and a touch area, wherein the touch area comprises an angle display icon and a sensor type icon; when the processor 802 outputs the three-dimensional visualization image of the environment around the movable platform in the display area on the display interface, it is specifically configured to:
acquiring touch operation of a user on an angle display icon and/or a sensor type icon in the touch area;
and generating an angle display instruction corresponding to the touch operation according to the touch operation, and displaying the three-dimensional visual image on a display area of the display interface according to a display visual angle indicated by the angle display instruction.
Further, the touch operation includes at least one of: click operation, slide operation, and drag operation.
Further, when the processor 802 outputs the three-dimensional visualization image of the environment around the movable platform on the display interface, it is specifically configured to:
acquiring the current motion state of the movable platform;
determining a display visual angle of the three-dimensional visual image according to the current motion state;
and displaying the three-dimensional visual image in a display area of the display interface according to the display visual angle.
Further, the current motion state includes a reverse state, and when the processor 802 determines the display view angle of the three-dimensional visual image according to the current motion state, it is specifically configured to:
and when the movable platform is determined to be in a reversing state, determining that the display visual angle of the three-dimensional visual image is the three-dimensional visual image in a first preset area away from the movable platform.
Further, the current motion state includes a lane change state, and when the processor 802 determines the display view angle of the three-dimensional visualization image according to the current motion state, it is specifically configured to:
and when the movable platform is determined to be in the lane changing state, determining that the display visual angle of the three-dimensional visual image is the three-dimensional visual image of the lane of which the lane is changed in a second preset area.
Further, the current motion state includes an acceleration state, and when the processor 802 determines the display view angle of the three-dimensional visualization image according to the current motion state, it is specifically configured to:
and when the movable platform is determined to be in an acceleration state, determining that the display visual angle of the three-dimensional visual image is the three-dimensional visual image in a third preset area away from the movable platform.
Further, the current motion state comprises any one or more of a reversing state, a lane changing state, a braking state, an accelerating state and a decelerating state.
Further, before the processor 802 projects the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing, the processor is further configured to:
acquiring touch operation of a user on a sensor type icon in the touch area;
and determining a target sensor corresponding to the sensor type icon selected by the touch operation, and acquiring three-dimensional point cloud data acquired by the target sensor.
Further, when the processor 802 projects the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image, it is specifically configured to:
determining a coordinate transformation relationship between the image data and the three-dimensional point cloud data;
converting point cloud data corresponding to the image data and the three-dimensional point cloud data into the same coordinate system based on the coordinate conversion relation;
and projecting the three-dimensional point cloud data converted into the same coordinate system into the image data for fusion processing to obtain a fusion image.
Further, when the processor 802 renders the fused image and determines the three-dimensional visualization image of the environment where the movable platform is located, it is specifically configured to:
projecting the fused image to a two-dimensional plane to obtain at least one projected image;
determining obstacle information of the environment around which the movable platform is located according to the at least one projection image;
and determining the three-dimensional visual image according to the obstacle information.
Further, the obstacle information includes any one or more of position information, size information, and distance information of the obstacle.
Further, the obstacle includes any one or more of a pedestrian, a vehicle, an animal, and a plant.
Further, the sensor includes any one or more of an image sensor, a laser radar, a millimeter wave radar, and an ultrasonic radar.
In the embodiment of the invention, the interactive system can project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fusion image, render the fusion image, determine the three-dimensional visual image of the surrounding environment where the movable platform is located, and output the three-dimensional visual image of the surrounding environment where the movable platform is located on the display interface. By means of the embodiment, the safety of the movable platform in the moving process can be improved.
An embodiment of the present invention further provides a movable platform, including: a body; a power system arranged on the body and configured to provide the movable platform with power to move; and a processor configured to project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image, render the fused image and determine a three-dimensional visualization of the environment surrounding the movable platform, and output the three-dimensional visualization of the surrounding environment on a display interface.
Further, the display interface includes a display area; when the processor outputs the three-dimensional visualization image of the environment around the movable platform on the display interface, the processor is specifically configured to:
outputting, on a display area on the display interface, a three-dimensional visualization of an environment surrounding the movable platform.
Further, the display interface comprises a display area and a touch area, wherein the touch area comprises an angle display icon and a sensor type icon; when the processor outputs the three-dimensional visualization image of the environment around the movable platform in the display area on the display interface, the processor is specifically configured to:
acquiring touch operation of a user on an angle display icon and/or a sensor type icon in the touch area;
and generating an angle display instruction corresponding to the touch operation according to the touch operation, and displaying the three-dimensional visual image on a display area of the display interface according to a display visual angle indicated by the angle display instruction.
Further, the touch operation includes at least one of: click operation, slide operation, and drag operation.
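A minimal sketch of how a touch operation on the angle display icon could be turned into an angle display instruction is given below; the event fields, the pitch/yaw representation of the display view angle, and the specific responses to click, slide, and drag operations are illustrative assumptions rather than part of the described embodiment.

from dataclasses import dataclass

@dataclass
class TouchEvent:
    target: str   # e.g. "angle_display_icon" or "sensor_type_icon" (hypothetical names)
    kind: str     # "click", "slide" or "drag"
    delta: float  # slide/drag amount, interpreted here as degrees of rotation

@dataclass
class AngleDisplayInstruction:
    pitch: float  # elevation of the display view angle, in degrees
    yaw: float    # azimuth of the display view angle, in degrees

def handle_touch(event: TouchEvent, current: AngleDisplayInstruction) -> AngleDisplayInstruction:
    # Generate the angle display instruction corresponding to the touch operation.
    if event.target != "angle_display_icon":
        return current
    if event.kind == "click":
        # A click could toggle between a follow view and a top-down view.
        return AngleDisplayInstruction(90.0, 0.0) if current.pitch < 90.0 else AngleDisplayInstruction(30.0, 0.0)
    if event.kind in ("slide", "drag"):
        # A slide or drag could rotate the view around the movable platform.
        return AngleDisplayInstruction(current.pitch, (current.yaw + event.delta) % 360.0)
    return current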
Further, when the processor outputs the three-dimensional visualization image of the environment around the movable platform on the display interface, the processor is specifically configured to:
acquiring the current motion state of the movable platform;
determining a display visual angle of the three-dimensional visual image according to the current motion state;
and displaying the three-dimensional visual image in a display area of the display interface according to the display visual angle.
Further, the current motion state includes a reverse state, and the processor is specifically configured to, when determining the display view angle of the three-dimensional visual image according to the current motion state:
and when the movable platform is determined to be in a reversing state, determining the display view angle of the three-dimensional visual image to be a view angle that shows the three-dimensional visual image within a first preset area from the movable platform.
Further, the current motion state includes a lane change state, and when the processor determines the display view angle of the three-dimensional visual image according to the current motion state, the processor is specifically configured to:
and when the movable platform is determined to be in the lane changing state, determining the display view angle of the three-dimensional visual image to be a view angle that shows the three-dimensional visual image, within a second preset area, of the lane into which the movable platform is changing.
Further, the current motion state includes an acceleration state, and when the processor determines the display view angle of the three-dimensional visualization image according to the current motion state, the processor is specifically configured to:
and when the movable platform is determined to be in an acceleration state, determining the display view angle of the three-dimensional visual image to be a view angle that shows the three-dimensional visual image within a third preset area from the movable platform.
Further, the current motion state comprises any one or more of a reversing state, a lane changing state, a braking state, an accelerating state and a decelerating state.
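The mapping from the current motion state to a display view angle could look like the following sketch, which follows the reversing, lane-changing, and acceleration cases described above; the concrete preset areas (ranges and camera pitch values) and the fallback view are illustrative assumptions.

def view_for_motion_state(state: str, lane_change_direction: str = "left") -> dict:
    # Return a simple description of the display view angle for the 3D visual image.
    if state == "reversing":
        # First preset area: show the region behind and around the platform.
        return {"focus": "rear", "range_m": 5.0, "pitch_deg": 45.0}
    if state == "lane_changing":
        # Second preset area: show the lane the platform is changing into.
        return {"focus": lane_change_direction + "_lane", "range_m": 20.0, "pitch_deg": 30.0}
    if state == "accelerating":
        # Third preset area: show a wider region ahead of the platform.
        return {"focus": "front", "range_m": 50.0, "pitch_deg": 20.0}
    # Braking, decelerating, or cruising: fall back to a default follow view.
    return {"focus": "follow", "range_m": 30.0, "pitch_deg": 35.0}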
Further, before the processor projects the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image, the processor is further configured to:
acquiring touch operation of a user on a sensor type icon in the touch area;
and determining a target sensor corresponding to the sensor type icon selected by the touch operation, and acquiring three-dimensional point cloud data acquired by the target sensor.
Further, the processor projects the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing, and when a fusion image is obtained, the processor is specifically configured to:
determining a coordinate transformation relationship between the image data and the three-dimensional point cloud data;
converting the image data and the corresponding three-dimensional point cloud data into the same coordinate system based on the coordinate conversion relationship;
and projecting the three-dimensional point cloud data converted into the same coordinate system into the image data for fusion processing to obtain a fusion image.
Further, the processor renders the fused image, and when determining the three-dimensional visualization image of the environment around the movable platform, is specifically configured to:
projecting the fused image to a two-dimensional plane to obtain at least one projected image;
determining obstacle information of the environment around which the movable platform is located according to the at least one projection image;
and determining the three-dimensional visual image according to the obstacle information.
Further, the obstacle information includes any one or more of position information, size information, and distance information of the obstacle.
Further, the obstacle includes any one or more of a pedestrian, a vehicle, an animal, and a plant.
Further, the sensor includes any one or more of an image sensor, a laser radar, a millimeter wave radar, and an ultrasonic radar.
In the embodiment of the invention, the movable platform can project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image, render the fused image, determine the three-dimensional visual image of the surrounding environment where the movable platform is located, and output that three-dimensional visual image on the display interface. This embodiment can improve the safety of the movable platform while it is moving.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the method described in the embodiments of the present invention is implemented, and the system corresponding to the embodiments of the present invention may also be implemented; details are not repeated here.
The computer-readable storage medium may be an internal storage unit of the system according to any of the foregoing embodiments, for example, a hard disk or a memory of the system. The computer-readable storage medium may also be an external storage device of the system, such as a plug-in hard drive, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the system. The computer-readable storage medium is used to store the computer program and other programs and data required by the system, and may also be used to temporarily store data that has been output or is to be output.
It should be noted that certain terms are used throughout the description and claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This specification and the claims do not distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and should therefore be interpreted to mean "including, but not limited to". The description that follows covers preferred embodiments of the invention, but it is made for the purpose of illustrating the general principles of the invention and is not intended to limit the scope of the invention. The scope of the present invention is defined by the appended claims.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element, or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element, or intervening elements may also be present. When an element is referred to as being "electrically connected" to another element, the connection can be made by contact, for example through wires, or in a contactless manner, for example by contactless coupling or by indirect coupling or communication via some interface, device, or unit. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative: the division into units is only a logical division, and other divisions may be used in practice; a plurality of units or components may be combined or integrated into another system; or some features may be omitted or not performed.
The above disclosure is intended to be illustrative of only some embodiments of the invention, and is not intended to limit the scope of the invention.

Claims (46)

1. An interaction method for a movable platform, applied to the movable platform, wherein a display interface is provided on an interaction system, the method comprising:
projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image;
rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located;
outputting a three-dimensional visualization of the environment surrounding the movable platform on the display interface.
2. The method of claim 1, wherein the display interface comprises a display area; the outputting, on the display interface, a three-dimensional visualization of an environment surrounding the movable platform, comprising:
outputting, on a display area on the display interface, a three-dimensional visualization of an environment surrounding the movable platform.
3. The method of claim 1, wherein the display interface comprises a display area and a touch area, the touch area comprising an angle display icon and a sensor type icon; the outputting, in a display area on the display interface, a three-dimensional visualization of the environment in which the movable platform is located comprises:
acquiring touch operation of a user on an angle display icon and/or a sensor type icon in the touch area;
and generating an angle display instruction corresponding to the touch operation according to the touch operation, and displaying the three-dimensional visual image on a display area of the display interface according to a display visual angle indicated by the angle display instruction.
4. The method of claim 3,
the touch operation comprises at least one of the following: a click operation, a slide operation, and a drag operation.
5. The method of claim 1, wherein outputting on the display interface a three-dimensional visualization of the environment surrounding the movable platform comprises:
acquiring the current motion state of the movable platform;
determining a display visual angle of the three-dimensional visual image according to the current motion state;
and displaying the three-dimensional visual image in a display area of the display interface according to the display visual angle.
6. The method of claim 5, wherein the current motion state comprises a reverse state, and wherein determining the display perspective of the three-dimensional visual image based on the current motion state comprises:
and when the movable platform is determined to be in a reversing state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image within a first preset area from the movable platform.
7. The method of claim 5, wherein the current motion state comprises a lane change state, and wherein determining the display perspective of the three-dimensional visual image based on the current motion state comprises:
and when the movable platform is determined to be in the lane changing state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image, within a second preset area, of the lane into which the movable platform is changing.
8. The method of claim 5, wherein the current motion state comprises an acceleration state, and wherein determining the display perspective of the three-dimensional visual image based on the current motion state comprises:
and when the movable platform is determined to be in an acceleration state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image within a third preset area from the movable platform.
9. The method of claim 5,
the current motion state comprises any one or more of a reversing state, a lane changing state, a braking state, an accelerating state and a decelerating state.
10. The method according to claim 1, wherein before projecting the three-dimensional point cloud data collected by the sensor into the image data collected by the camera for fusion processing to obtain a fused image, the method further comprises:
acquiring touch operation of a user on a sensor type icon in the touch area;
and determining a target sensor corresponding to the sensor type icon selected by the touch operation, and acquiring three-dimensional point cloud data acquired by the target sensor.
11. The method according to claim 1, wherein the projecting the three-dimensional point cloud data collected by the sensor into the image data collected by the camera for fusion processing to obtain a fused image comprises:
determining a coordinate transformation relationship between the image data and the three-dimensional point cloud data;
converting the image data and the corresponding three-dimensional point cloud data into the same coordinate system based on the coordinate conversion relationship;
and projecting the three-dimensional point cloud data converted into the same coordinate system into the image data for fusion processing to obtain a fusion image.
12. The method of claim 1, wherein said rendering the fused image to determine a three-dimensional visualization of the environment surrounding the movable platform comprises:
projecting the fused image to a two-dimensional plane to obtain at least one projected image;
determining obstacle information of the environment around which the movable platform is located according to the at least one projection image;
and determining the three-dimensional visual image according to the obstacle information.
13. The method according to claim 12, wherein the obstacle information includes any one or more of position information, size information, and distance information of the obstacle.
14. The method of claim 13, wherein the obstacle comprises any one or more of a pedestrian, a vehicle, an animal, a plant.
15. The method of claim 1,
the sensor comprises any one or more of an image sensor, a laser radar, a millimeter wave radar and an ultrasonic radar.
16. An interactive system, applied to a movable platform, wherein a display interface is provided on the interactive system, the system comprising: one or more processors, working collectively or individually, the processors being configured to:
projecting the three-dimensional point cloud data acquired by the sensor into image data acquired by the camera for fusion processing to obtain a fusion image;
rendering the fused image, and determining a three-dimensional visual image of the surrounding environment where the movable platform is located;
outputting a three-dimensional visualization of the environment surrounding the movable platform on the display interface.
17. The system of claim 16, wherein the display interface comprises a display area; when the processor outputs the three-dimensional visualization image of the environment around the movable platform on the display interface, the processor is specifically configured to:
outputting, on a display area on the display interface, a three-dimensional visualization of an environment surrounding the movable platform.
18. The system of claim 16, wherein the display interface comprises a display area and a touch area, the touch area comprising an angle display icon and a sensor type icon; when the processor outputs the three-dimensional visualization image of the environment around the movable platform in the display area on the display interface, the processor is specifically configured to:
acquiring touch operation of a user on an angle display icon and/or a sensor type icon in the touch area;
and generating an angle display instruction corresponding to the touch operation according to the touch operation, and displaying the three-dimensional visual image on a display area of the display interface according to a display visual angle indicated by the angle display instruction.
19. The system of claim 18,
the touch operation comprises at least one of the following: a click operation, a slide operation, and a drag operation.
20. The system of claim 16, wherein the processor, when outputting on the display interface the three-dimensional visualization of the environment surrounding the movable platform, is configured to:
acquiring the current motion state of the movable platform;
determining a display visual angle of the three-dimensional visual image according to the current motion state;
and displaying the three-dimensional visual image in a display area of the display interface according to the display visual angle.
21. The system of claim 20, wherein the current motion state comprises a reverse state, and the processor is configured to, when determining the display perspective of the three-dimensional visualization image according to the current motion state, specifically:
and when the movable platform is determined to be in a reversing state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image within a first preset area from the movable platform.
22. The system of claim 20, wherein the current motion state comprises a lane change state, and wherein the processor is configured to, when determining the display perspective of the three-dimensional visual image according to the current motion state, specifically:
and when the movable platform is determined to be in the lane changing state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image, within a second preset area, of the lane into which the movable platform is changing.
23. The system according to claim 20, wherein the current motion state comprises an acceleration state, and the processor is configured to, when determining the display perspective of the three-dimensional visualization image according to the current motion state, specifically:
and when the movable platform is determined to be in an acceleration state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image within a third preset area from the movable platform.
24. The system of claim 20,
the current motion state comprises any one or more of a reversing state, a lane changing state, a braking state, an accelerating state and a decelerating state.
25. The system of claim 16, wherein before the processor projects the three-dimensional point cloud data collected by the sensor into the image data collected by the camera for fusion processing to obtain a fused image, the processor is further configured to:
acquiring touch operation of a user on a sensor type icon in the touch area;
and determining a target sensor corresponding to the sensor type icon selected by the touch operation, and acquiring three-dimensional point cloud data acquired by the target sensor.
26. The system of claim 16, wherein the processor projects the three-dimensional point cloud data collected by the sensor into the image data collected by the camera for fusion processing, and when obtaining the fusion image, the processor is specifically configured to:
determining a coordinate transformation relationship between the image data and the three-dimensional point cloud data;
converting the image data and the corresponding three-dimensional point cloud data into the same coordinate system based on the coordinate conversion relationship;
and projecting the three-dimensional point cloud data converted into the same coordinate system into the image data for fusion processing to obtain a fusion image.
27. The system of claim 16, wherein the processor, when rendering the fused image and determining the three-dimensional visualization of the environment around the movable platform, is configured to:
projecting the fused image to a two-dimensional plane to obtain at least one projected image;
determining obstacle information of the environment around which the movable platform is located according to the at least one projection image;
and determining the three-dimensional visual image according to the obstacle information.
28. The system of claim 27, wherein the obstacle information comprises any one or more of position information, size information, and distance information of the obstacle.
29. The system of claim 28, wherein the obstacle comprises any one or more of a pedestrian, a vehicle, an animal, a plant.
30. The system of claim 16,
the sensor comprises any one or more of an image sensor, a laser radar, a millimeter wave radar and an ultrasonic radar.
31. A movable platform, comprising:
a body;
a power system, provided on the body and configured to provide power for movement of the movable platform;
a processor, configured to: project the three-dimensional point cloud data acquired by the sensor into the image data acquired by the camera for fusion processing to obtain a fused image; render the fused image and determine a three-dimensional visual image of the surrounding environment where the movable platform is located; and output the three-dimensional visual image of the environment around which the movable platform is located on a display interface.
32. The movable platform of claim 31, wherein the display interface comprises a display area; when the processor outputs the three-dimensional visualization image of the environment around the movable platform on the display interface, the processor is specifically configured to:
outputting, on a display area on the display interface, a three-dimensional visualization of an environment surrounding the movable platform.
33. The movable platform of claim 31, wherein the display interface comprises a display area and a touch area, the touch area comprising an angle display icon and a sensor type icon; when the processor outputs the three-dimensional visualization image of the environment around the movable platform in the display area on the display interface, the processor is specifically configured to:
acquiring touch operation of a user on an angle display icon and/or a sensor type icon in the touch area;
and generating an angle display instruction corresponding to the touch operation according to the touch operation, and displaying the three-dimensional visual image on a display area of the display interface according to a display visual angle indicated by the angle display instruction.
34. The movable platform of claim 33,
the touch operation comprises at least one of the following: a click operation, a slide operation, and a drag operation.
35. The movable platform of claim 31, wherein the processor, when outputting on the display interface a three-dimensional visualization of an environment surrounding the movable platform, is configured to:
acquiring the current motion state of the movable platform;
determining a display visual angle of the three-dimensional visual image according to the current motion state;
and displaying the three-dimensional visual image in a display area of the display interface according to the display visual angle.
36. The movable platform of claim 35, wherein the current motion state comprises a reverse state, and the processor is configured to, when determining the display perspective of the three-dimensional visual image according to the current motion state, specifically:
and when the movable platform is determined to be in a reversing state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image within a first preset area from the movable platform.
37. The movable platform of claim 35, wherein the current motion state comprises a lane change state, and wherein the processor is configured to, when determining the display perspective of the three-dimensional visual image according to the current motion state, specifically:
and when the movable platform is determined to be in the lane changing state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image, within a second preset area, of the lane into which the movable platform is changing.
38. The movable platform of claim 35, wherein the current motion state comprises an acceleration state, and wherein the processor is configured to, when determining the display perspective of the three-dimensional visual image based on the current motion state, in particular:
and when the movable platform is determined to be in an acceleration state, determining the display perspective of the three-dimensional visual image to be a perspective that shows the three-dimensional visual image within a third preset area from the movable platform.
39. The movable platform of claim 35,
the current motion state comprises any one or more of a reversing state, a lane changing state, a braking state, an accelerating state and a decelerating state.
40. The movable platform of claim 31, wherein before the processor projects the three-dimensional point cloud data collected by the sensor into the image data collected by the camera for fusion processing to obtain a fused image, the processor is further configured to:
acquiring touch operation of a user on a sensor type icon in the touch area;
and determining a target sensor corresponding to the sensor type icon selected by the touch operation, and acquiring three-dimensional point cloud data acquired by the target sensor.
41. The movable platform of claim 31, wherein the processor projects the three-dimensional point cloud data collected by the sensor into image data collected by the camera for fusion processing, and when obtaining a fused image, is specifically configured to:
determining a coordinate transformation relationship between the image data and the three-dimensional point cloud data;
converting the image data and the corresponding three-dimensional point cloud data into the same coordinate system based on the coordinate conversion relationship;
and projecting the three-dimensional point cloud data converted into the same coordinate system into the image data for fusion processing to obtain a fusion image.
42. The movable platform of claim 31, wherein the processor is configured to render the fused image and, when determining the three-dimensional visualization of the environment around which the movable platform is located, in particular:
projecting the fused image to a two-dimensional plane to obtain at least one projected image;
determining obstacle information of the environment around which the movable platform is located according to the at least one projection image;
and determining the three-dimensional visual image according to the obstacle information.
43. The movable platform of claim 42, wherein the obstacle information comprises any one or more of position information, size information, and distance information of the obstacle.
44. The movable platform of claim 43, wherein the obstacles comprise any one or more of pedestrians, vehicles, animals, plants.
45. The movable platform of claim 31,
the sensor comprises any one or more of an image sensor, a laser radar, a millimeter wave radar and an ultrasonic radar.
46. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 15.
CN201980008059.6A 2019-06-26 Interaction method and system of movable platform, movable platform and storage medium Active CN111683840B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/092989 WO2020258073A1 (en) 2019-06-26 2019-06-26 Interaction method and system for movable platform, movable platform, and storage medium

Publications (2)

Publication Number Publication Date
CN111683840A (en) 2020-09-18
CN111683840B (en) 2024-04-30



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120218125A1 (en) * 2011-02-28 2012-08-30 Toyota Motor Engin. & Manufact. N.A.(TEMA) Two-way video and 3d transmission between vehicles and system placed on roadside
CN102303605A (en) * 2011-06-30 2012-01-04 中国汽车技术研究中心 Multi-sensor information fusion-based collision and departure pre-warning device and method
US9098754B1 (en) * 2014-04-25 2015-08-04 Google Inc. Methods and systems for object detection using laser point clouds
KR20180066618A (en) * 2016-12-09 2018-06-19 (주)엠아이테크 Registration method of distance data and 3D scan data for autonomous vehicle and method thereof
CN107194962A (en) * 2017-04-01 2017-09-22 深圳市速腾聚创科技有限公司 Point cloud and plane picture fusion method and device
CN107972585A (en) * 2017-11-30 2018-05-01 惠州市德赛西威汽车电子股份有限公司 Scene rebuilding System and method for is looked around with reference to the adaptive 3 D of radar information
CN109085598A (en) * 2018-08-13 2018-12-25 吉利汽车研究院(宁波)有限公司 Detection system for obstacle for vehicle
CN109683170A (en) * 2018-12-27 2019-04-26 驭势科技(北京)有限公司 A kind of image traveling area marking method, apparatus, mobile unit and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114756162A (en) * 2021-01-05 2022-07-15 成都极米科技股份有限公司 Touch system and method, electronic device and computer readable storage medium
CN114756162B (en) * 2021-01-05 2023-09-05 成都极米科技股份有限公司 Touch system and method, electronic device and computer readable storage medium
CN114816169A (en) * 2022-06-29 2022-07-29 荣耀终端有限公司 Desktop icon display method and device and storage medium

Also Published As

Publication number Publication date
WO2020258073A1 (en) 2020-12-30
US11922583B2 (en) 2024-03-05
US20210366202A1 (en) 2021-11-25

Similar Documents

Publication Title
CN109278742B (en) Vehicle and automatic parking method and system
CN110794970B (en) Three-dimensional display method and system of automatic parking interface and vehicle
CN111674380B (en) Remote vehicle moving system, method, vehicle and storage medium
WO2020258073A1 (en) Interaction method and system for movable platform, movable platform, and storage medium
CN109733284B (en) Safe parking auxiliary early warning method and system applied to vehicle
WO2017158768A1 (en) Vehicle control system, vehicle control method, and vehicle control program
US11321911B2 (en) Method for representing the surroundings of a vehicle
WO2015156821A1 (en) Vehicle localization system
DE112018004507T5 (en) INFORMATION PROCESSING DEVICE, MOTION DEVICE AND METHOD AND PROGRAM
JP6365385B2 (en) Three-dimensional object detection apparatus and three-dimensional object detection method
CN105684039B (en) Condition analysis for driver assistance systems
CN111216127A (en) Robot control method, device, server and medium
US11082616B2 (en) Overlooking image generation system for vehicle and method thereof
CN113168691A (en) Information processing device, information processing method, program, mobile body control device, and mobile body
CN107139918A (en) A kind of vehicle collision reminding method and vehicle
CN107117099A (en) A kind of vehicle collision reminding method and vehicle
JP2019109707A (en) Display control device, display control method and vehicle
CN112124092A (en) Parking assist system
CN115239548A (en) Target detection method, target detection device, electronic device, and medium
CN113895429A (en) Automatic parking method, system, terminal and storage medium
CN112356850A (en) Early warning method and device based on blind spot detection and electronic equipment
CN111683840B (en) Interaction method and system of movable platform, movable platform and storage medium
CN112106017A (en) Vehicle interaction method, device and system and readable storage medium
CN107925747B (en) Image processing device for vehicle
CN115179924A (en) Display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant