CN112389459A - Man-machine interaction method and device based on panoramic looking-around - Google Patents

Man-machine interaction method and device based on panoramic looking-around

Info

Publication number
CN112389459A
CN112389459A (application CN202011109986.8A)
Authority
CN
China
Prior art keywords
screen
user
panoramic
coordinate
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011109986.8A
Other languages
Chinese (zh)
Other versions
CN112389459B (en)
Inventor
陈曲
张坤雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aiways Automobile Shanghai Co Ltd
Original Assignee
Aiways Automobile Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aiways Automobile Shanghai Co Ltd
Priority to CN202011109986.8A
Publication of CN112389459A
Priority to PCT/CN2021/123882 (published as WO2022078464A1)
Application granted
Publication of CN112389459B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system

Abstract

The invention discloses a human-machine interaction method and device based on a panoramic surround view. The method comprises: receiving a user operation on the panoramic surround-view image triggered on the in-vehicle head-unit screen, and acquiring the corresponding operation information, which comprises at least one screen coordinate; displaying, according to the operation information, at least one driving function for the user to select; converting the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate; and controlling the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user. The invention solves the problem that an ordinary panoramic surround-view system cannot interact with the user at the driving level: by locating and converting the screen coordinates selected by the user, screen coordinates on the display plane are put in correspondence with world coordinates of the 3D real world, so that the user can control the vehicle directly from the displayed surround-view image to complete the corresponding driving function.

Description

Man-machine interaction method and device based on panoramic looking-around
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a human-machine interaction method and device based on a panoramic surround view.
Background
A panoramic surround-view system fuses information from on-board sensors such as cameras and radars (e.g., millimeter-wave radar and ultrasonic radar) and displays, on the vehicle screen, real-scene imagery that can cover up to 360 degrees around the vehicle together with auxiliary information such as lane lines and object frames, so that the user can check the conditions around the vehicle body. A virtual visualization system, by contrast, fuses on-board sensor information and renders lane lines, objects, signs and the like on the vehicle screen as a virtual model for the user to view; at present it is mainly used to display the road situation ahead.
A real-scene 2D or 3D surround-view system can only display the environment around the vehicle for the user to view; it cannot support direct real-world interaction with the user based on the imagery it shows. A virtual visualization system supports partial interaction, such as parking-slot selection during automatic parking, but it cannot display real image information and its field of view is not wide enough, which makes communication between the user and the system harder and limits its range of application.
In addition, during the transition from conventional driver assistance to automated driving, many functions solve only part of the problem, and the driver's intention remains an important component of driving. Simple scenarios such as lane changing, overtaking and turning can be triggered with buttons or simple commands, but a user's driving intention is often hard to describe clearly and accurately, so a complex intention must be expressed by stacking commands or cannot be expressed at all. For example, the current adaptive cruise function can only follow the vehicle directly ahead; following any other vehicle runs into significant limitations of the prior art in both implementation and interaction. Likewise, on a road without lane markings, having the vehicle drive to, or stop at, a specified position is also difficult to achieve interactively with the prior art.
Disclosure of Invention
In view of the above, the present invention is proposed to provide a human-machine interaction method and device based on a panoramic surround view that overcome, or at least partially solve, the above problems.
According to one aspect of the invention, a human-machine interaction method based on a panoramic surround view is provided, comprising the following steps:
receiving a user operation on the panoramic surround-view image triggered on the in-vehicle head-unit screen, and acquiring corresponding operation information, the operation information comprising at least one screen coordinate;
displaying, according to the operation information, at least one driving function for the user to select;
converting the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate; and
controlling the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user.
According to another aspect of the present invention, there is provided a human-machine interaction device based on a panoramic surround view, comprising:
a receiving module, adapted to receive a user operation on the panoramic surround-view image triggered on the head-unit screen and to acquire corresponding operation information, the operation information comprising at least one screen coordinate;
a display module, adapted to display, according to the operation information, at least one driving function for the user to select;
a conversion module, adapted to convert the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate; and
a control module, adapted to control the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user.
According to yet another aspect of the present invention, there is provided a computing device comprising a processor, a memory, a communication interface and a communication bus, through which the processor, the memory and the communication interface communicate with one another;
the memory is used to store at least one executable instruction that causes the processor to perform the operations corresponding to the above human-machine interaction method based on a panoramic surround view.
According to still another aspect of the present invention, there is provided a computer storage medium in which at least one executable instruction is stored, the executable instruction causing a processor to perform the operations corresponding to the above human-machine interaction method based on a panoramic surround view.
According to the human-machine interaction method and device based on a panoramic surround view provided by the invention, a user operation on the panoramic surround-view image triggered on the head-unit screen is received and the corresponding operation information, comprising at least one screen coordinate, is acquired; at least one driving function is displayed according to the operation information for the user to select; the at least one screen coordinate contained in the operation information is converted to obtain at least one corresponding world coordinate; and the vehicle is controlled to execute the corresponding driving function according to the at least one world coordinate and the selected driving function. The invention solves the problem that an ordinary panoramic surround-view system cannot interact with the user at the driving level: by locating and converting the screen coordinates selected by the user, screen coordinates on the display plane are put in correspondence with world coordinates of the 3D real world, so that the user can control the vehicle directly from the displayed surround-view image to complete the corresponding driving function. This provides the user with personalized driver-assistance functions and suits scenarios such as assisted driving and automated driving. Moreover, the interaction with the user is simple and intuitive, the interaction flow is streamlined, and the difficulty of use is reduced.
The foregoing is only an overview of the technical solution of the present invention. Specific embodiments of the invention are described below so that its technical means can be understood more clearly and the above and other objects, features and advantages become more apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart of a human-machine interaction method based on a panoramic surround view according to one embodiment of the invention;
FIG. 2 is a block diagram of a human-machine interaction device based on a panoramic surround view according to an embodiment of the invention;
FIG. 3 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart of a human-machine interaction method based on a panoramic surround view according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S101: receive a user operation on the panoramic surround-view image triggered on the head-unit screen, and acquire the corresponding operation information.
The panoramic surround-view image is generated by fusing images of the vehicle's surroundings acquired by the on-board sensors, such as on-board cameras and on-board radars (e.g., millimeter-wave radar and ultrasonic radar). These sensors can capture real-scene imagery covering up to 360 degrees around the vehicle, containing lane lines, objects around the vehicle, signs and other information. The real-scene images are fused according to the regions they cover; specifically, a virtual visualization system can fuse the various information contained in the sensor images, so that lane lines, surrounding objects, signs and the like are shown on the vehicle screen as a virtual model, letting the user clearly grasp the vehicle's environment and the current road conditions. Besides the fused sensor imagery, the panoramic surround-view image also records at least one piece of object information and the screen coordinates of the at least one object within the image. Specifically, while fusing the images, the virtual visualization system recognizes the objects they contain, for example lane lines, other vehicles, trees, signs and parking slots, and records the object information in the surround-view image. Further, while fusing, it identifies each object's position in the surround-view image and records that position, i.e., the screen coordinate at which the object is displayed on the head-unit screen.
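For illustration only, such an object record might be represented as follows; this is a minimal Python sketch, and the class and field names are hypothetical rather than taken from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SurroundViewObject:
    """One object recognized in the fused surround-view image (hypothetical layout)."""
    object_id: int
    category: str                   # e.g. "lane_line", "vehicle", "sign", "parking_slot"
    screen_xy: Tuple[float, float]  # where the object is drawn on the head-unit screen, in pixels
```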
After the panoramic surround-view image is obtained, it is displayed to the user on the vehicle's head-unit screen so that the user can conveniently check the surroundings. Besides displaying the image, the head-unit screen also receives the operations on the image triggered by the user and acquires the user's operation information. The operation information comprises at least one screen coordinate, specifically the screen coordinate of every point the user touches on the screen.
Compared with a 2D or 3D surround-view system that can only display the scene around the vehicle, this helps the user interact with the real scene through the panoramic surround-view image, and the user's intention can be better determined from the operation triggered on the head-unit screen.
Step S102: display, according to the operation information, at least one driving function for the user to select.
The user's operation is recognized from the screen coordinates contained in the operation information. Specifically, the screen coordinates are compared to determine the type of operation, such as a frame-selection operation or a line-drawing operation: when the screen coordinates form a closed loop, the operation can be taken as frame selection; when they form a straight or curved line, it can be taken as line drawing; and so on, without limitation here. After the operation is recognized, the object information involved in it is acquired: from the object enclosed by the frame in a frame-selection operation, or from the objects the line connects or passes through in a line-drawing operation. Further, if neither operation involves any object, the object information involved in the operation is null.
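As a sketch of the closed-loop test just described, assuming the touch trace arrives as a plain list of pixel coordinates (the function name and the closure tolerance are illustrative assumptions, not taken from the patent):

```python
import math

def classify_operation(points, close_tol=30.0):
    """Classify a touch trace as 'frame_selection' (closed loop) or
    'line_drawing' (open stroke). `points` is [(x, y), ...] in screen pixels;
    the 30-pixel closure tolerance is an illustrative assumption."""
    if len(points) < 2:
        return None
    (x0, y0), (xn, yn) = points[0], points[-1]
    # A trace whose end returns close to its start is treated as a closed loop.
    gap = math.hypot(xn - x0, yn - y0)
    return "frame_selection" if gap <= close_tol else "line_drawing"
```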
According to the recognized operation and the object information involved in it, at least one corresponding driving function is offered to the user. For example, given a recognized frame-selection operation that encloses a vehicle, driving functions such as following, overtaking and avoiding are offered for the user to choose from; given a frame-selection operation that encloses a parking slot, a park-in-slot function is offered; given a recognized line-drawing operation that involves a vehicle, avoiding and overtaking functions are offered; given a line-drawing operation that involves a lane line, functions such as lane keeping, lane changing, driving on a road without lane lines, and stopping at a specified position are offered. All of the above are illustrations; the corresponding driving functions are set according to the actual implementation and are not limited here.
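One way to realize this mapping is a simple lookup table from (operation, object category) to the offered functions, following the examples above; the keys and function names below are hypothetical, a sketch rather than the claimed implementation:

```python
from typing import Dict, List, Optional, Tuple

# Hypothetical lookup from (operation, object category) to the driving
# functions offered to the user, mirroring the examples in the description.
DRIVING_FUNCTIONS: Dict[Tuple[str, Optional[str]], List[str]] = {
    ("frame_selection", "vehicle"): ["follow", "overtake", "avoid"],
    ("frame_selection", "parking_slot"): ["park_in_slot"],
    ("line_drawing", "vehicle"): ["avoid", "overtake"],
    ("line_drawing", "lane_line"): ["lane_keep", "lane_change"],
    ("line_drawing", None): ["drive_to_position", "stop_at_position"],
}

def offer_functions(operation: str, obj_category: Optional[str]) -> List[str]:
    """Return the driving functions to display for a recognized operation."""
    return DRIVING_FUNCTIONS.get((operation, obj_category), [])
```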
Furthermore, the driving functions may be presented to the user on the in-vehicle display, by voice broadcast, or in similar ways, and the user may confirm the desired driving function by on-screen selection, voice command, or the like.
Step S103: convert the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate.
After the user operation and the selected driving function are determined, the at least one screen coordinate contained in the operation information must be converted into at least one corresponding world coordinate. World coordinates, i.e., coordinates in the real scene, are what the vehicle uses to execute driving functions. To convert a screen coordinate into a world coordinate, a screen coordinate system and a world coordinate system are first constructed. The panoramic surround-view image is synthesized from the images acquired by the on-board sensors: those images are projected onto a curved surface defined in the real-world coordinate system and observed by a virtual camera in that coordinate system. A screen coordinate in the surround-view image is written as the vector v_screen = [X_vc, Y_vc]^T, the screen coordinate of any point in the image, with X coordinate X_vc and Y coordinate Y_vc; the two axes are mutually perpendicular, e.g. X horizontal and Y vertical. A world coordinate in the world coordinate system is written as v_world = [X_3D, Y_3D, Z_3D]^T, the coordinate vector of any point, with X coordinate X_3D, Y coordinate Y_3D and Z coordinate Z_3D; the world coordinate system fixes an object's position in the real world, and its three axes are mutually perpendicular, so if X is horizontal and Y is vertical, Z is normal to the X-Y plane. The conversion between the screen coordinate system and the world coordinate system is based on the formula α P V [X_3D, Y_3D, Z_3D, 1]^T = [X_vc, Y_vc, 1]^T, where v_screen and v_world are each extended by one dimension for the matrix operation. V is the camera pose matrix (3x4), which converts world coordinates to virtual-camera coordinates, and P is the camera perspective transformation matrix (3x3). Once the world coordinate system and the virtual camera's position, rotation and parameters (parameters tied to the virtual camera itself, such as focal length and pixel size) are fixed, the corresponding P and V are determined: P from the virtual-camera parameters, V from the camera position and rotation. Through the camera pose matrix and the perspective transformation matrix, a correspondence is established between the world coordinate system and the screen coordinate system. For any screen point v_screen, the formula constrains v_world to a straight line in the world coordinate system: the direction vector of the line is determined, but the specific position along the line corresponding to v_screen is not yet fixed.
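Under the pinhole convention above, back-projecting a screen point to the world ray it constrains can be sketched as follows; a numpy sketch assuming V = [R | t] is the 3x4 world-to-camera pose and P the 3x3 perspective matrix, with a hypothetical function name:

```python
import numpy as np

def screen_to_ray(v_screen, P, V):
    """Back-project a screen point [X_vc, Y_vc] to the world-space ray it
    constrains, per alpha * P @ V @ [X_3D, Y_3D, Z_3D, 1]^T = [X_vc, Y_vc, 1]^T.
    Returns (ray origin, unit direction) in world coordinates."""
    R, t = V[:, :3], V[:, 3]
    origin = -R.T @ t                     # virtual camera center in the world frame
    d_cam = np.linalg.inv(P) @ np.array([v_screen[0], v_screen[1], 1.0])
    d_world = R.T @ d_cam                 # rotate the viewing direction into the world frame
    return origin, d_world / np.linalg.norm(d_world)
```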
For the at least one screen coordinate contained in the operation information, the corresponding straight line in the world coordinate system can be determined using the camera pose matrix and the camera perspective transformation matrix in the formula above. Next, the specific position of v_screen in the world coordinate system must be determined, namely the intersection of the computed line with the curved surface, giving the point v_surface on the surface, which is the world coordinate corresponding to the screen coordinate. It is written as the vector v_surface = [X_surface, Y_surface, Z_surface]^T. Its relation to v_world is v_surface = η v_world, where η is the parameter at which the line meets the surface, expressing that v_surface and v_world are collinear. In the calculation, any point v_world on the line may be chosen, η v_world substituted into the surface equation f_surface(X_surface, Y_surface, Z_surface) = 0 and solved for η, and v_surface computed from it. Solving the surface equation in this way determines the intersection of the line and the surface, and hence the world coordinate corresponding to the screen coordinate.
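The intersection step can be sketched numerically as follows. Here the ray is parameterized as origin + η·direction, which reduces to the patent's η·v_world form when the ray is anchored at the coordinate origin; scipy's root finder is used, under the stated assumption that f_surface changes sign along the ray:

```python
from scipy.optimize import brentq

def intersect_ray_with_surface(origin, direction, f_surface, eta_max=100.0):
    """Solve f_surface(origin + eta * direction) = 0 for eta > 0 and return
    the intersection point v_surface. A numeric sketch: assumes the surface
    equation changes sign somewhere in (0, eta_max] along the ray."""
    g = lambda eta: f_surface(origin + eta * direction)
    eta = brentq(g, 1e-6, eta_max)        # eta at which the ray meets the surface
    return origin + eta * direction
```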
In a particular application of this embodiment, when the at least one screen coordinate contained in the operation information covers a non-ground-plane object, such as a vehicle or a sign, the preset curved surface causes v_surface to deviate from the object's true world coordinate. The on-board sensors are therefore also used to fuse coordinate data with v_surface, yielding a corrected object coordinate v_refined; methods such as coordinate weighted summation or angle-based coordinate fusion may be used: v_refined = f_fuse(v_surface, v_radar, v_ultrasonic, v_sensor3, …), where f_fuse is the fusion function and v_radar, v_ultrasonic, v_sensor3, … are the object coordinate vectors reported by the respective on-board sensors. Fusing the object's coordinate vector data with each on-board sensor gives the corrected world coordinate in the world coordinate system. Taking a frame-selected object as an example: its coordinate on the curved surface of the world coordinate system is [X_surface, Y_surface, Z_surface]^T; ignoring the Z coordinate, i.e., the distance information, in the projection leaves [X_surface, Y_surface]^T, and the two projection points determine a target frame A for the object. A target frame B is determined from the on-board sensors, and the two are matched by intersection-over-union: when A∩B / A∪B is greater than a preset threshold, A and B are taken to be the same target frame, which yields the coordinate of the frame-selected object.
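The intersection-over-union match of the two target frames can be sketched as follows, with axis-aligned boxes given as (x1, y1, x2, y2); the 0.5 threshold is an illustrative assumption, as the patent only speaks of a preset threshold:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def same_target(box_a, box_b, threshold=0.5):
    """Frames A and B are taken to be the same target when A∩B / A∪B
    exceeds the preset threshold."""
    return iou(box_a, box_b) > threshold
```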
The frame-selected object can then be conveniently located from its coordinate, so that the driving route required by object-based driving functions such as following, overtaking and avoiding can be planned around it. Further, when several objects are frame-selected, an object set O = {o_1, o_2, …, o_n} is obtained, and the route required by the following, overtaking, avoiding and other driving functions is planned with respect to each object in the set.
When the screen coordinates contained in the operation information all lie on ground-plane objects, such as a lane line selected by a line-drawing operation, the height coordinate of every point in the world coordinate system can be fixed directly to the ground. Taking the line-drawing operation as an example, the surface equation degenerates directly into a plane equation; the coordinates of each point are determined and collected into a coordinate set, from which the driving route can conveniently be planned.
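In this degenerate case the intersection has a closed form; a sketch, assuming the ground is the plane Z = 0 in the world frame defined above:

```python
def intersect_ray_with_ground(origin, direction):
    """Degenerate case of the surface intersection: the curved surface becomes
    the ground plane Z = 0, so eta solves origin_z + eta * dir_z = 0.
    Assumes the ray is not parallel to the ground (dir_z != 0)."""
    eta = -origin[2] / direction[2]
    return origin + eta * direction
```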
Step S104: control the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user.
The at least one converted world coordinate and the driving function selected by the user are passed to the vehicle planning and control system, which controls the vehicle to execute the function: for example, avoiding another vehicle selected by the user at the given world coordinates, or driving along the route specified by the user's line-drawing operation.
Furthermore, if the selected driving function cannot be executed at the given world coordinates, the user can be prompted so that another driving function can be selected in time.
According to the human-machine interaction method based on a panoramic surround view provided by the invention, a user operation on the panoramic surround-view image triggered on the head-unit screen is received and the corresponding operation information, comprising at least one screen coordinate, is acquired; at least one driving function is displayed according to the operation information for the user to select; the at least one screen coordinate contained in the operation information is converted to obtain at least one corresponding world coordinate; and the vehicle is controlled to execute the corresponding driving function according to the at least one world coordinate and the selected driving function. The invention solves the problem that an ordinary panoramic surround-view system cannot interact with the user at the driving level: by locating and converting the screen coordinates selected by the user, screen coordinates on the display plane are put in correspondence with world coordinates of the 3D real world, so that the user can control the vehicle directly from the displayed surround-view image to complete the corresponding driving function. This provides the user with personalized driver-assistance functions and suits scenarios such as assisted driving and automated driving. Moreover, the interaction with the user is simple and intuitive, the interaction flow is streamlined, and the difficulty of use is reduced.
Fig. 2 is a block diagram of a human-machine interaction device based on a panoramic surround view according to an embodiment of the invention. As shown in Fig. 2, the device comprises:
a receiving module 210, adapted to receive a user operation on the panoramic surround-view image triggered on the head-unit screen and to acquire the corresponding operation information, the operation information comprising at least one screen coordinate;
a display module 220, adapted to display, according to the operation information, at least one driving function for the user to select;
a conversion module 230, adapted to convert the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate; and
a control module 240, adapted to control the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user.
Optionally, the panoramic surround-view image is generated by fusing images of the vehicle's surroundings acquired by the on-board sensors, and at least one piece of object information, together with each object's screen coordinates in the image, is recorded in it.
Optionally, the display module 220 is further adapted to: identify the user's operation according to the screen coordinates contained in the operation information, the operation comprising a frame-selection operation and/or a line-drawing operation; and provide the user with at least one driving function corresponding to the operation, according to the identified operation and the object information involved in it.
Optionally, the conversion module 230 is further adapted to: construct the conversion correspondence between the screen coordinate system and the world coordinate system; and calculate the at least one world coordinate from the at least one screen coordinate and the conversion correspondence.
The descriptions of the modules refer to the corresponding descriptions in the method embodiments, and are not repeated herein.
The invention also provides a non-volatile computer storage medium storing at least one executable instruction, which can cause execution of the human-machine interaction method based on a panoramic surround view in any of the above method embodiments.
Fig. 3 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 3, the computing device may include: a processor (processor)302, a communication Interface 304, a memory 306, and a communication bus 308.
Wherein:
the processor 302, communication interface 304, and memory 306 communicate with each other via a communication bus 308.
A communication interface 304 for communicating with network elements of other devices, such as clients or other servers.
The processor 302 is configured to execute the program 310, and may specifically perform the relevant steps in the above embodiments of the human-machine interaction method based on a panoramic surround view.
In particular, program 310 may include program code comprising computer operating instructions.
The processor 302 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs.
The memory 306 is used to store the program 310. The memory 306 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk storage.
The program 310 may specifically be configured to cause the processor 302 to execute the human-machine interaction method based on a panoramic surround view in any of the above method embodiments. For the specific implementation of each step in the program 310, reference may be made to the corresponding steps and unit descriptions in the above embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A human-machine interaction method based on a panoramic surround view, comprising:
receiving a user operation on the panoramic surround-view image triggered on the in-vehicle head-unit screen, and acquiring corresponding operation information, the operation information comprising at least one screen coordinate;
displaying, according to the operation information, at least one driving function for selection by the user;
converting the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate; and
controlling the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user.
2. The method according to claim 1, wherein the panoramic surround-view image is generated by fusing images of the vehicle's surroundings acquired by on-board sensors, and at least one piece of object information, together with the screen coordinates of the at least one object in the panoramic surround-view image, is recorded in the image.
3. The method according to claim 1, wherein displaying at least one driving function for selection by the user according to the operation information further comprises:
identifying the user's operation according to the screen coordinates contained in the operation information, the operation comprising a frame-selection operation and/or a line-drawing operation; and
providing the user with at least one driving function corresponding to the operation, according to the identified operation and the object information involved in the operation.
4. The method according to any one of claims 1-3, wherein converting the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate further comprises:
constructing a screen coordinate system and a world coordinate system;
for the at least one screen coordinate, determining its corresponding straight line in the world coordinate system using a camera pose matrix and a camera perspective transformation matrix; and
calculating the intersection of the straight line with the curved surface in the world coordinate system to obtain the at least one world coordinate corresponding to the at least one screen coordinate.
5. A human-machine interaction device based on a panoramic surround view, comprising:
a receiving module, adapted to receive a user operation on the panoramic surround-view image triggered on the in-vehicle head-unit screen and to acquire corresponding operation information, the operation information comprising at least one screen coordinate;
a display module, adapted to display, according to the operation information, at least one driving function for selection by the user;
a conversion module, adapted to convert the at least one screen coordinate contained in the operation information to obtain at least one corresponding world coordinate; and
a control module, adapted to control the vehicle to execute the corresponding driving function according to the at least one world coordinate and the driving function selected by the user.
6. The device according to claim 5, wherein the panoramic surround-view image is generated by fusing images of the vehicle's surroundings acquired by on-board sensors, and at least one piece of object information, together with each object's screen coordinates in the panoramic surround-view image, is recorded in the image.
7. The device according to claim 5, wherein the display module is further adapted to:
identify the user's operation according to the screen coordinates contained in the operation information, the operation comprising a frame-selection operation and/or a line-drawing operation; and
provide the user with at least one driving function corresponding to the operation, according to the identified operation and the object information involved in the operation.
8. The device according to any one of claims 5-7, wherein the conversion module is further adapted to:
construct a screen coordinate system and a world coordinate system;
for the at least one screen coordinate, determine its corresponding straight line in the world coordinate system using a camera pose matrix and a camera perspective transformation matrix; and
calculate the intersection of the straight line with the curved surface in the world coordinate system to obtain the at least one world coordinate corresponding to the at least one screen coordinate.
9. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used to store at least one executable instruction that causes the processor to perform the operations corresponding to the human-machine interaction method based on a panoramic surround view according to any one of claims 1-4.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform the operations corresponding to the human-machine interaction method based on a panoramic surround view according to any one of claims 1-4.
CN202011109986.8A 2020-10-16 2020-10-16 Man-machine interaction method and device based on panoramic looking-around Active CN112389459B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011109986.8A CN112389459B (en) 2020-10-16 2020-10-16 Man-machine interaction method and device based on panoramic looking-around
PCT/CN2021/123882 WO2022078464A1 (en) 2020-10-16 2021-10-14 Method and apparatus for human-machine interaction based on panoramic surround view

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011109986.8A CN112389459B (en) 2020-10-16 2020-10-16 Man-machine interaction method and device based on panoramic looking-around

Publications (2)

Publication Number Publication Date
CN112389459A (en) 2021-02-23
CN112389459B CN112389459B (en) 2022-04-12

Family

ID=74595923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011109986.8A Active CN112389459B (en) 2020-10-16 2020-10-16 Man-machine interaction method and device based on panoramic looking-around

Country Status (2)

Country Link
CN (1) CN112389459B (en)
WO (1) WO2022078464A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022078464A1 (en) * 2020-10-16 2022-04-21 爱驰汽车(上海)有限公司 Method and apparatus for human-machine interaction based on panoramic surround view

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120191339A1 (en) * 2011-01-24 2012-07-26 Hon Hai Precision Industry Co., Ltd. Portable electronic device and panorama navigation method using the portable electronic device
CN102622907A (en) * 2011-01-28 2012-08-01 财团法人工业技术研究院 Driving assistance method and driving assistance system for electric vehicle
CN102910166A (en) * 2011-08-04 2013-02-06 日产自动车株式会社 Parking assistance device and parking assistance method
CN107792179A (en) * 2017-09-27 2018-03-13 浙江零跑科技有限公司 A kind of parking guidance method based on vehicle-mounted viewing system
CN108216215A (en) * 2018-01-10 2018-06-29 维森软件技术(上海)有限公司 A kind of method for assisting in parking
CN108647638A (en) * 2018-05-09 2018-10-12 东软集团股份有限公司 A kind of vehicle location detection method and device
US20180376121A1 (en) * 2017-06-22 2018-12-27 Acer Incorporated Method and electronic device for displaying panoramic image
CN110962844A (en) * 2019-10-28 2020-04-07 纵目科技(上海)股份有限公司 Vehicle course angle correction method and system, storage medium and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10266023B2 (en) * 2017-05-01 2019-04-23 Ford Global Technologies, Llc System to automate hitching a trailer
US11067993B2 (en) * 2017-08-25 2021-07-20 Magna Electronics Inc. Vehicle and trailer maneuver assist system
CN112389459B (en) * 2020-10-16 2022-04-12 爱驰汽车(上海)有限公司 Man-machine interaction method and device based on panoramic looking-around


Also Published As

Publication number Publication date
WO2022078464A1 (en) 2022-04-21
CN112389459B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN113554698B (en) Vehicle pose information generation method and device, electronic equipment and storage medium
CN110341597B (en) Vehicle-mounted panoramic video display system and method and vehicle-mounted controller
JP6675448B2 (en) Vehicle position detecting method and device
WO2021098254A1 (en) Automatic parking interaction method and device
US11657319B2 (en) Information processing apparatus, system, information processing method, and non-transitory computer-readable storage medium for obtaining position and/or orientation information
US20220392108A1 (en) Camera-only-localization in sparse 3d mapped environments
US8754760B2 (en) Methods and apparatuses for informing an occupant of a vehicle of surroundings of the vehicle
CN108638999B (en) Anti-collision early warning system and method based on 360-degree look-around input
CN109733284B (en) Safe parking auxiliary early warning method and system applied to vehicle
WO2009119110A1 (en) Blind spot display device
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
CN110758243A (en) Method and system for displaying surrounding environment in vehicle driving process
CN114913506A (en) 3D target detection method and device based on multi-view fusion
US11562576B2 (en) Dynamic adjustment of augmented reality image
CN115023736A (en) Method for measuring environmental topography
CN112389459B (en) Man-machine interaction method and device based on panoramic looking-around
Yeh et al. Driver assistance system providing an intuitive perspective view of vehicle surrounding
Weber et al. Approach for improved development of advanced driver assistance systems for future smart mobility concepts
CN113065999B (en) Vehicle-mounted panorama generation method and device, image processing equipment and storage medium
Yuan et al. A lightweight augmented reality system to see-through cars
CN110077320B (en) Reversing method and device based on radar
CN114120260A (en) Method and system for identifying travelable area, computer device, and storage medium
CN112698717B (en) Local image processing method and device, vehicle-mounted system and storage medium
CN113177502B (en) Method and device for detecting looking-around obstacle, medium, vehicle-mounted system and vehicle
CN116142172A (en) Parking method and device based on voxel coordinate system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20240428

Granted publication date: 20220412