CN113722043A - Scene display method and device for AVP, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113722043A
Authority
CN
China
Prior art keywords
model
vehicle
mode
parking
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111095606.4A
Other languages
Chinese (zh)
Inventor
朱彦劼
张智盛
马星
张晓亮
张武
芦笛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xianta Intelligent Technology Co Ltd
Original Assignee
Shanghai Xianta Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xianta Intelligent Technology Co Ltd filed Critical Shanghai Xianta Intelligent Technology Co Ltd
Priority to CN202111095606.4A
Publication of CN113722043A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 - General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/72 - Mobile telephones; cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 - User interfaces with means for local support of applications that increase the functionality
    • H04M 1/7243 - User interfaces with interactive means for internal management of messages
    • H04M 1/72439 - User interfaces with interactive means for internal management of messages for image or video messaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a scene display method and apparatus for AVP, an electronic device, and a storage medium. The scene display method for AVP comprises the following steps: displaying a virtual scene in a parking interaction interface, wherein the virtual scene is configured to display a map model and a vehicle model, and the position of the vehicle model in the map model matches the real position of the current vehicle while it is being parked; if the current display mode is a first mode, displaying the map model and the vehicle model at a viewing angle perpendicular to the ground of the map model when the virtual scene is displayed, and changing the display content in the parking interaction interface in response to the user's specified adjustment operation on the virtual scene; and if the current display mode is a second mode, displaying the map model and the vehicle model at a viewing angle inclined to the ground of the map model when the virtual scene is displayed, and making the camera source always move with the vehicle model.

Description

Scene display method and device for AVP, electronic equipment and storage medium
Technical Field
The present invention relates to the field of vehicles, and in particular, to a scene display method and apparatus for AVP, an electronic device, and a storage medium.
Background
AVP (Automated Valet Parking) can be understood as automatic valet parking; the whole process may require no in-vehicle operations, such as vehicle control, from the customer.
In existing AVP processing, a user can only trigger the start of AVP and cannot effectively learn its parking progress.
Disclosure of Invention
The invention provides a scene display method and apparatus for AVP, an electronic device, and a storage medium, so as to solve the problem that the parking progress of AVP cannot be effectively known.
According to a first aspect of the present invention, there is provided a scene display method for AVP, comprising:
displaying a virtual scene in a parking interaction interface of a mobile terminal during automatic parking of the current vehicle, wherein the virtual scene is configured to display a map model of the current parking lot and a vehicle model of the current vehicle, and the position of the vehicle model in the map model matches the real position of the current vehicle while it is being parked; the mobile terminal is not the in-vehicle head unit;
determining a current display mode;
if the current display mode is a first mode: when the mobile terminal displays the virtual scene, displaying the map model and the vehicle model at a viewing angle perpendicular to the ground of the map model, and changing the display content in the parking interaction interface in response to the user's specified adjustment operation on the virtual scene;
if the current display mode is a second mode: when the mobile terminal displays the virtual scene, displaying the map model and the vehicle model at a viewing angle inclined to the ground of the map model, and making a camera source always move with the vehicle model, wherein the camera source is used to capture the virtual scene.
Optionally, the method further includes:
if the current display mode is the first mode, then:
while no specified adjustment operation is being responded to, and after a specified duration has elapsed since the last specified adjustment operation ended, making the virtual scene change with the position of the vehicle model so that the vehicle model always remains within a designated area of the parking interaction interface;
while a specified adjustment operation is being responded to, and within the specified duration after the specified adjustment operation ends, making the virtual scene no longer change with the position of the vehicle model.
Optionally, the changing the display content in the parking interaction interface in response to the user's designated adjustment operation on the virtual scene includes at least one of:
responding to a first specified adjustment operation of a user, and adaptively changing a display area of the map model in the parking interaction interface;
responding to a second specified adjustment operation of the user, and adaptively enlarging the display range of the map model and the display size of the vehicle model in the parking interaction interface;
responding to a third specified adjustment operation of the user, and adaptively reducing the display range of the map model and the display size of the vehicle model in the parking interaction interface;
and responding to a fourth specified adjustment operation of the user, and adaptively rotating the camera source for acquiring the virtual scene.
Optionally, the first specified adjustment operation includes: a drag operation for the map model;
the second specified adjustment operation includes: forming two contacts in the parking interaction interface, and increasing the distance between the two contacts;
the third specified adjustment operation comprises: forming two contacts in the parking interaction interface, and reducing the distance between the two contacts;
the fourth specified adjustment operation comprises: and forming two contacts in the parking interaction interface, and enabling the two contacts to perform the operation of rotating in the same direction.
Optionally, in the first mode, the map model and the vehicle model are displayed in a 2D shape.
Optionally, in the second mode, the map model and the vehicle model are displayed in a 3D shape.
Optionally, in the second mode, the camera source is located above and behind the vehicle model, the camera source faces the vehicle model, and a relative position of the camera source with respect to the vehicle model remains unchanged.
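The fixed above-and-behind placement of the camera source can be sketched as follows, using the Y-up virtual coordinate system described in the detailed description; the offset values and function name are illustrative assumptions:

```python
import math

def follow_camera_pose(vehicle_pos, vehicle_heading, back=8.0, up=5.0):
    """Place the camera source behind and above the vehicle model.

    vehicle_pos: (x, y, z) with Y as the axis perpendicular to the ground;
    vehicle_heading: yaw in radians in the ground plane.
    Returns (camera_pos, look_at): the camera keeps a fixed offset in the
    vehicle's frame and always faces the vehicle model.
    """
    x, y, z = vehicle_pos
    camera_pos = (x - back * math.cos(vehicle_heading),  # behind the vehicle
                  y + up,                                # above the vehicle
                  z - back * math.sin(vehicle_heading))
    return camera_pos, vehicle_pos
```

Because the offset is expressed in the vehicle's frame, re-evaluating this each frame makes the relative position of the camera source with respect to the vehicle model remain unchanged as the vehicle moves.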
According to a second aspect of the present invention, there is provided a scene display apparatus for AVP, comprising:
the scene display module is used for displaying a virtual scene in a parking interaction interface in the process of automatic parking of a current vehicle, wherein the virtual scene is configured to be capable of displaying a map model of the current parking lot and a vehicle model of the current vehicle, and the position of the vehicle model in the map model is matched with the real position of the current vehicle when the current vehicle is parked;
the mode selection module is used for responding to mode selection operation of a user in the parking interaction interface and determining a current display mode;
a first mode processing module configured to: if the current display mode is the first mode, display the map model and the vehicle model at a viewing angle perpendicular to the ground of the map model when the virtual scene is displayed, and change the display content in the parking interaction interface in response to the user's specified adjustment operation on the virtual scene;
a second mode processing module configured to: if the current display mode is the second mode, display the map model and the vehicle model at a viewing angle inclined to the ground of the map model when the virtual scene is displayed, and make a camera source always move with the vehicle model, wherein the camera source is used to capture the virtual scene.
According to a third aspect of the present invention, there is provided a storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to the first aspect and its alternatives when executing the program.
In the scene display method and apparatus for AVP, the electronic device, and the storage medium provided by the invention, a virtual scene is displayed during parking, and the map model of the parking lot and the vehicle model can be shown in it. The position of the vehicle model in the map model represents the real position of the current vehicle during parking, so the parking progress can be fed back to the user accurately and in a timely manner through the virtual scene. The fed-back information reflects not only the position of the vehicle but also information about the environment in the parking lot, which helps the user accurately learn the vehicle's parking environment and recognize more accurately and clearly whether the vehicle is safe.
Further, in the present invention, the virtual scene is configured with two display modes: in the first mode, the display content can be changed in response to a specified adjustment operation; in the second mode, follow-up display of the vehicle model can be achieved in a targeted manner. Different observation requirements can thus be met, improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a first flowchart illustrating a scene display method for AVP according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a scene display method for AVP according to an embodiment of the present invention;
FIG. 3 is a third flowchart illustrating a scene display method for AVP according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a display screen in a full view mode according to an embodiment of the present invention;
FIGS. 5 a-5 d are four schematic diagrams illustrating the operation of the full-view mode according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a display screen in a follow mode according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the relative positions of the vehicle model and the camera source in the following mode according to an embodiment of the invention;
FIG. 8 is a block diagram of a scene display apparatus for AVP according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The scene display method for AVP provided by the embodiment of the present invention may be applied to a terminal, which may be a terminal of a user; specifically, the terminal may be a vehicle-mounted terminal (i.e., an in-vehicle head unit) or a mobile terminal (e.g., a mobile phone, a tablet computer, etc.), or it may be a server. In an example, the scene display method for AVP according to the embodiment of the present invention may be applied to a mobile terminal (e.g., a mobile phone) and implemented by an APP installed in the mobile terminal.
Referring to fig. 1, a scene display method for AVP includes:
s101: displaying a virtual scene in a parking interactive interface in the process of automatically parking the current vehicle;
wherein the virtual scene is configured to be able to display a map model of the current parking lot and a vehicle model of the current vehicle, and a position of the vehicle model in the map model matches a real position of the current vehicle when parked; the parking interaction interface is a parking interaction interface of a mobile terminal (such as a mobile phone, a tablet personal computer and the like); the mobile terminal is not a vehicle machine;
s102: determining a current display mode;
s103: whether the current display mode is a first mode;
if the determination result of step S103 is yes, the following steps S104 and S105 may be performed;
s104: displaying the map model and the vehicle model at a viewing angle perpendicular to the ground of the map model while displaying the virtual scene;
s105: responding to the appointed adjustment operation of the user on the virtual scene, and changing the display content in the parking interaction interface;
s106: whether the current display mode is a second mode;
if the determination result in step S106 is yes, step S107 may be implemented: when the virtual scene is displayed, the map model and the vehicle model are displayed at a visual angle inclined to the ground of the map model, and a camera source is made to move along with the vehicle model all the time, wherein the camera source is used for collecting the virtual scene.
The specific scheme of step S102 may be, for example: determining the current display mode in response to a mode selection operation of the user in the parking interaction interface; for example, upon starting automatic parking, the current display mode is determined to be the first mode or the second mode.
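The branch through steps S103 to S107 can be sketched as a simple dispatch on the current display mode; the mode names and dictionary keys below are illustrative assumptions, not terms from the patent:

```python
FIRST_MODE = "overview"   # top-down view, user-adjustable display
SECOND_MODE = "follow"    # tilted view, camera tracks the vehicle model

def render_settings(mode):
    """Map the current display mode to view-angle and follow behaviour."""
    if mode == FIRST_MODE:
        return {"view": "perpendicular",   # viewing angle perpendicular to the map ground
                "camera_follows": False,   # user gestures adjust the display instead
                "accepts_adjustment": True}
    if mode == SECOND_MODE:
        return {"view": "tilted",          # viewing angle inclined to the map ground
                "camera_follows": True,    # camera source always moves with the vehicle
                "accepts_adjustment": False}
    raise ValueError(f"unknown display mode: {mode}")
```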
The map model and the vehicle model are displayed on the mobile terminal.
The virtual scene is configured to display a map model of the current parking lot (all of the map model may be displayed, or only part of it) and a vehicle model of the current vehicle (usually all of the vehicle model is displayed, though part of it may be); further, it is not excluded that, in response to some operations on the terminal (for example, a mobile phone), the vehicle model is not displayed in the display screen of the virtual scene.
Wherein the location of the vehicle model in the map model matches the true location of the current vehicle when parked. Specifically, in a terminal (for example, a mobile phone), a virtual three-dimensional coordinate system having a map model and a vehicle model may be formed, and furthermore, a view may be taken of a part or all of the map model and the vehicle model in the three-dimensional coordinate system based on a camera source (or a designated angle of view), and further, a virtual scene displayed in a display interface may be formed.
Therefore, the position of the vehicle model in the map model can change adaptively with the real position, and that position can represent the current vehicle's real position and its changes. The parking progress can thus be fed back to the user accurately and in a timely manner through the virtual scene; the fed-back information reflects not only the position of the vehicle but also information about the environment in the parking lot, so that the user can accurately learn the vehicle's parking environment and know more accurately and clearly whether the vehicle is safe.
In addition, in the embodiment of the present invention, the virtual scene is configured with two display modes, a first mode and a second mode: in the first mode, the display content can be changed in response to the specified adjustment operation;
in the second mode, follow-up display of the vehicle model can be achieved in a targeted manner. Different observation requirements can thus be met, improving the user experience.
Specifically, in the first mode the user can conveniently adjust and change what is observed and displayed, meeting the need to comprehensively understand the environment and the parking route throughout the parking lot; in the second mode, the user can conveniently and specifically observe the vehicle and its nearby environment.
In one embodiment, the real position may be determined by matching the vehicle detection information to the map data, or may be estimated from the vehicle detection information; for example, it may be estimated from information acquired by sensors in the vehicle (e.g., GPS, an acceleration sensor, an image sensor, radar, or positioning implemented via communication methods such as infrared and Bluetooth), estimated by combining historical positions, or measured by cameras (or other detection devices) in the parking lot. Whatever the method, as long as the real position can be acquired and the position of the vehicle model in the map model can represent it, the method can serve as an alternative of the embodiment of the present invention.
In one embodiment, please refer to fig. 2, the method further includes:
if the current display mode is the first mode, then: when the virtual scene is displayed, the method further comprises the following steps:
s108: when the specified adjustment operation is not responded and after the specified time length after the specified adjustment operation is finished, the virtual scene is made to change along with the position change of the vehicle model, so that the vehicle model is always located in the specified area of the parking interaction interface;
s109: and when the specified adjustment operation is responded, and within a specified time length after the specified adjustment operation is finished, the virtual scene is enabled not to change along with the position change of the vehicle model any more.
The specified duration may be any pre-specified duration, for example several seconds. In some examples, the specified duration may also be variable; for example, the corresponding specified duration may be determined based on the type of the specified adjustment operation, further ensuring that the specified duration suits the adjustment operation. For instance, the specified duration corresponding to the first specified adjustment operation may be longer (or shorter) than that corresponding to the second and third specified adjustment operations.
The designated area may be any area in the parking interaction interface, for example, a central area of the parking interaction interface, and in other examples, the designated area may not be limited to the central area.
If the display state of the parking interaction interface before entering the first mode and before responding to any specified adjustment operation is taken as the default state, the above process can also be understood as follows:
when a specified adjustment operation is responded to, the parking interaction interface exits the default state and enters another state; after the specified adjustment operation ends, the interface waits for the specified duration, remaining in the other (non-default) state; once the specified duration has elapsed, the interface returns to the default state.
In one embodiment, referring to fig. 4 and fig. 6, the first mode may be characterized as a full-view mode and the second mode as a following mode; clicking the 'full view' button or the 'follow' button in the parking interaction interface can each be understood as a mode selection operation.
If the 'full view' button is clicked, it is determined that the first mode is entered and the parking interaction interface is displayed in the first mode. If the interface was already in the first mode, it remains in the first mode after the click; additionally, if it was not in the default state, it enters the default state (i.e., the virtual scene changes with the position of the vehicle model so that the vehicle model always stays in the designated area of the parking interaction interface). If the interface was in the second mode, after the click it switches from the second mode to the first mode and enters the default state.
If the 'follow' button is clicked, it is determined that the second mode is entered and the parking interaction interface is displayed in the second mode. If the interface was in the first mode, after the click it switches from the first mode to the second mode; if it was already in the second mode, it remains in the second mode.
In one embodiment, in the first mode, the map model and the vehicle model are displayed in a 2D shape. In the second mode, the map model and the vehicle model are displayed in a 3D shape.
Specifically, if a plane parallel to the ground of the map model is taken as the XZ plane formed by the X axis and the Z axis of the virtual coordinate system, and the axis perpendicular to the ground is taken as the Y axis, then the 2D shape can be understood as the shape formed by projecting part or all of the content of the map model onto the XZ plane, and the 3D shape is the shape in the XYZ three-dimensional space.
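Under that convention, obtaining the 2D shape amounts to dropping the vertical (Y) component of each vertex; a one-line illustrative sketch, with the function name as an assumption:

```python
def project_to_ground(points_3d):
    """Project model vertices onto the ground plane for the 2D first mode.

    X and Z span the ground plane and Y is perpendicular to it, so the
    2D shape is obtained by dropping the Y component of each vertex.
    """
    return [(x, z) for (x, y, z) in points_3d]
```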
For the map model, the manner of forming it may include (i.e., the method may further include), for example:
determining a plurality of entities to be simulated based on the map data of the current parking lot, the vehicle detection information, and the real position;
retrieving, from a model database, a virtual model for simulating each entity to be simulated;
updating the content of the map model displayed in the virtual scene based on the retrieved virtual models.
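The three steps above can be sketched as follows; the dictionary-based scene and model-database structures are illustrative assumptions, not the patent's data model:

```python
def update_map_model(scene, entities_to_simulate, model_database):
    """Refresh the displayed map model from detected entities.

    scene: dict mapping entity ids to placed virtual models.
    entities_to_simulate: dicts with "id", "type", and "position" keys.
    model_database: dict mapping entity types to pre-built virtual models.
    """
    for entity in entities_to_simulate:
        virtual_model = model_database.get(entity["type"])  # retrieve the virtual model
        if virtual_model is None:
            continue  # no model for this entity type; leave the scene unchanged
        # add the model to the map model, or replace the existing one
        scene[entity["id"]] = {"model": virtual_model,
                               "position": entity["position"]}
    return scene
```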
The vehicle detection information is detected by an environment detection section on the current vehicle; the environment detection section may be any component in the vehicle for detecting the external environment, for example an image acquisition unit, an infrared sensor, an ultrasonic sensor, a Bluetooth communication module, a radio-frequency communication module, etc.
The plurality of entities to be simulated comprises real entities present near the real position, where 'near the real position' means at a distance from the real position smaller than a set threshold. A real entity may be, for example, at least one of a pillar, a wall, a barrier, a decoration, a warning board, a sign, a security room, an elevator door, a billboard, another vehicle, a person, an animal, or another object or organism. In some examples, real entities that are not near the real position may also be included.
Correspondingly, in the model database, a virtual model can be established in advance for each real entity. Then, when the map model needs to be updated, the entities to be simulated are determined, and their virtual models are retrieved and merged into the map model (added to it or substituted for existing content), thereby updating the map model. Taking fig. 4 and 6 as an example, in addition to the vehicle model 201, a virtual model 202 of a pillar, a virtual model 203 of a road line, and the like may be formed in the map model.
Some entities to be simulated may be determined because they are described in the map data: for example, when the vehicle arrives at a certain position, nearby entities can be found from the map data and then verified against the vehicle detection information to confirm that they really exist (verification may also be skipped, the entities being determined directly). Other entities to be simulated may not be described in the map data; in that case, the type of a detected entity can be identified from the vehicle detection information (for example, whether it is a person or a vehicle can be determined from its image), and the entity is then determined as an entity to be simulated.
Furthermore, for entities to be simulated that are described in the map data, the determination may be made before the vehicle reaches the corresponding location (and the virtual model may likewise be called and added to the virtual space in advance).
In this scheme, the entities near the vehicle can be accurately simulated and represented based on the map data and the vehicle detection information, helping the user to understand the real process and environment of parking accurately and comprehensively.
In addition, once the user accurately and comprehensively understands the real parking process and environment, the progress and safety of the parking can be judged more clearly, providing a basis for decisions such as whether to intervene in the parking or whether to summon the vehicle in time.
However, the map model need not be established or updated using the above scheme. In one example, the various virtual models in the map model may be established in advance as required. In another example, a general model of a cone (or another shape) may be used to represent some or all of the obstacles (e.g., obstacles identified from the vehicle detection information), i.e., as a general virtual model for simulating obstacles. Further, the size of the general model displayed in the virtual scene may be adjusted with reference to the actual size of the obstacle, so that a larger actual size yields a correspondingly larger general model in the virtual scene. Regardless of how the map model and the vehicle model are formed, the result does not depart from the scope of the embodiments of the present invention.
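A minimal sketch of the size adjustment just described; the base size and the clamping limits are illustrative assumptions, not values from the embodiment:

```python
def generic_obstacle_scale(actual_size_m, base_size_m=0.5,
                           min_scale=0.5, max_scale=4.0):
    """Scale factor for the general (e.g. cone-shaped) obstacle model:
    a larger actual obstacle yields a larger model in the virtual scene,
    clamped so that extreme sizes stay readable on screen."""
    return max(min_scale, min(max_scale, actual_size_m / base_size_m))
```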
In a specific implementation process, referring to fig. 3, step S15 may include at least one of the following steps:
s151: responding to a first specified adjustment operation of a user, and adaptively changing a display area of the map model in the parking interaction interface;
s152: responding to a second specified adjustment operation of the user, and adaptively enlarging the display range of the map model and the display size of the vehicle model in the parking interaction interface;
s153: responding to a third specified adjustment operation of the user, and adaptively reducing the display range of the map model and the display size of the vehicle model in the parking interaction interface;
s154: and responding to a fourth specified adjustment operation of the user, and adaptively rotating the camera source for acquiring the virtual scene.
In some examples, only one, two or three of the above steps S151, S152, S153 and S154 may be implemented; in other examples, all four steps may be implemented at the same time.
The first mode can also be understood as a full view mode, as in the example shown in fig. 4.
In one example of the full view mode, by default, the map is oriented to true north, the camera source focuses on the vehicle model, and the vehicle model is centered on the screen at a specified magnification; the camera source then moves as the vehicle moves.
With respect to step S151:
referring to fig. 5a, the first designated adjustment operation includes a drag operation on the map model. The drag operation can be understood as follows: one or more touch points are formed in the parking interaction interface (for example, in the display area of the map model) without clicking any other control, and the touch points then move (if there are several, they move synchronously). In fig. 5a, there is a single touch point that moves up, down, left and right (the manner and direction of movement are not limited to these and can be arbitrary), so that the map model can be dragged.
Specifically, in the full view mode, the map model can be dragged and viewed at will; after dragging, the camera source no longer moves with the vehicle model. While dragging, the map edge can be constrained not to pass the center point of the picture (or, alternatively, the map must occupy at least 50% of the screen; the constraint can be configured as required). This operation changes the position at which the map model is displayed in the display interface.
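The edge constraint ("the map edge may not pass the center point of the picture") can be sketched as a per-axis clamp on the pan offset; the coordinate convention (offset of the map center relative to the screen center, in screen units) is an assumption for illustration:

```python
def clamp_pan(offset, map_half_extent):
    """Clamp a 1-D pan offset (map center relative to screen center) so
    that neither map edge crosses the screen center point: with half-
    extent h, the map edges sit at offset - h and offset + h, so the
    offset must stay within [-h, h]. Applied separately to each axis."""
    return max(-map_half_extent, min(map_half_extent, offset))
```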
With respect to step S152:
referring to fig. 5b, the second designated adjustment operation includes: forming two contacts in the parking interaction interface, and increasing the distance between the two contacts; meanwhile, the moving direction of the contact can be arbitrary;
specifically, in the full view mode, the touch operation can be performed by two fingers, the center point of the two fingers is a zoom-out center point, and the zoom-in of the screen (i.e., the zoom-in of the displayed map model and the displayed vehicle model) can be realized along with the increase of the distance between the two fingers. After the vehicle model is magnified, the camera shooting source does not move along with the vehicle model any more, the magnification limit (the map is represented as zoom-in) is a specified magnification factor, and the camera shooting source can be configured at will according to the requirements; in this operation, even if the range of the map model displayed on the display interface is reduced, it can be understood that the distance between the imaging source and the ground is reduced.
With respect to step S153:
referring to fig. 5c, the third designated adjustment operation includes: forming two contacts in the parking interaction interface, and reducing the distance between the two contacts; meanwhile, the moving direction of the contact can be arbitrary;
specifically, in the full view mode, the touch operation can be performed by two fingers, the center point of the two fingers is a zoom-out center point, and the zoom-out of the screen (i.e., the zoom-in of the displayed map model and the displayed vehicle model) can be realized along with the decrease of the distance between the two fingers. After zooming out, the camera shooting source does not move along with the vehicle model any more, the zooming out limit (the map is expressed as zooming out) is another specified magnification, and the camera shooting source can be configured at will according to the requirement; in this operation, even if the range of the map model displayed on the display interface is enlarged, the distance between the imaging source and the ground can be understood to be increased.
With respect to step S154:
the fourth specified adjustment operation comprises: forming two contacts in the parking interaction interface and rotating them in the same direction. The actual positions, rotation speeds and directions of the contacts may be arbitrary. In a specific example, the direction of the contacts' rotation matches the rotation direction of the camera source, that is: when the two contacts rotate clockwise, the camera source also rotates clockwise, and when they rotate anticlockwise, the camera source also rotates anticlockwise. The axis of rotation of the camera source can be the axis of its field of view when it collects the virtual scene, which can also be understood as the straight line through the center of the camera source and the center of the vehicle model;
specifically, in the full view mode, the operation of touch control can be performed by two fingers, and the rotation of the picture (i.e., the rotation of the camera source) can be realized by rotating the two fingers by using the center point of the two fingers as the rotation center point. Further, the rotation may be configured such that the zooming-in and zooming-out cannot be realized, and after the rotation, the camera source does not follow the vehicle, and further, the rotation may be 360 degrees around the axis (for example, the axis of the camera source).
In one embodiment, in the second mode, the camera source is located above and behind the vehicle model, faces the vehicle model, and keeps its position relative to the vehicle model unchanged.
Taking fig. 7 as an example, to keep the relative position unchanged, the relative height y between the camera source 204 and the center point of the vehicle model 201, and the relative distance x in the X-axis direction, may be preset and then kept unchanged while the vehicle model moves. Meanwhile, the focal length of the camera source can be kept constant at a certain value or within a specified focal length range.
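Under the Y-up convention described earlier, the fixed relative position can be maintained by recomputing the camera position each frame from the vehicle pose; the heading parameter is an assumption added for illustration (the embodiment only fixes the relative distance and height):

```python
import math

def follow_camera(vehicle_pos, heading_rad, rel_dist_x, rel_height_y):
    """Second-mode camera: rel_dist_x behind the vehicle model along its
    heading on the ground plane, and rel_height_y above its center point.
    vehicle_pos is (x, y, z) with the Y axis perpendicular to the ground."""
    x = vehicle_pos[0] - rel_dist_x * math.cos(heading_rad)
    z = vehicle_pos[2] - rel_dist_x * math.sin(heading_rad)
    return (x, vehicle_pos[1] + rel_height_y, z)
```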
In a specific example, the parking interaction interface may automatically enter the second mode after the automatic parking is initiated and the vehicle begins to move.
Switching between the first mode and the second mode, and between the default state and a non-default state (for example, the state reached after zooming in, zooming out or rotating via a designated adjustment operation), may be presented as a dynamic change, meaning that the change of the virtual scene caused by the change of the camera source is visibly reflected. For example, in the process of changing from a non-default state back to the default state, the map model undergoes transitional rotation, movement and zooming.
In addition, whether in the first mode or the second mode, a scale can be displayed in the parking interaction interface; the scale length and scale divisions can be configured as follows:
for the scale divisions, several divisions can be configured in advance, and a division can be displayed when the actual unit distance exceeds that division's distance (namely the distance between two adjacent graduations); for example, when the actual unit distance is 3.4 meters, a 1-meter division can be displayed;
for the scale length, the scale can be drawn in proportion to the actual magnification;
when the actual magnification changes (for example, when zooming in, zooming out, or restoring the default state), the scale can be hidden; after the change finishes, the division and length are recalculated and the scale is displayed again.
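A sketch of the scale-bar rules above; the set of pre-configured divisions is an illustrative assumption, chosen so that an actual unit distance of 3.4 m yields the 1 m division from the example:

```python
DIVISIONS_M = [1, 5, 10, 50, 100]  # pre-configured divisions, in metres

def pick_division(actual_unit_distance_m):
    """Largest pre-configured division whose distance does not exceed
    the actual unit distance; None if even the smallest is too large."""
    fitting = [d for d in DIVISIONS_M if d <= actual_unit_distance_m]
    return max(fitting) if fitting else None

def scale_length_px(division_m, pixels_per_meter):
    """On-screen length of one division, drawn at the current magnification."""
    return division_m * pixels_per_meter
```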
In addition, the scale can be placed at a fixed position, within a fixed area, of the parking interaction interface, so that it does not change with the virtual scene.
Referring to fig. 8, an embodiment of the invention provides a scene display apparatus 300 for AVP, including:
the scene display module 301 is configured to display a virtual scene in a parking interaction interface during automatic parking of a current vehicle, where the virtual scene is configured to be capable of displaying a map model of the current parking lot and a vehicle model of the current vehicle, and a position of the vehicle model in the map model matches a real position of the current vehicle when the current vehicle is parked;
the mode selection module 302 is used for responding to mode selection operation of a user in the parking interaction interface and determining a current display mode;
a first mode processing module 303 configured to: if the current display mode is the first mode, then: when the virtual scene is displayed, displaying the map model and the vehicle model at a visual angle perpendicular to the ground of the map model, and responding to the specified adjustment operation of a user on the virtual scene to change the display content in the parking interaction interface;
a second mode processing module 304 for: if the current display mode is the second mode, then: when the virtual scene is displayed, the map model and the vehicle model are displayed at a visual angle inclined to the ground of the map model, and a camera source is made to move along with the vehicle model all the time, wherein the camera source is used for collecting the virtual scene.
Optionally, the first mode processing module 303 is further configured to:
if the current display mode is the first mode, then:
when the specified adjustment operation has not been responded to, and after a specified time length following the end of the specified adjustment operation, make the virtual scene change with the position change of the vehicle model, so that the vehicle model is always located in the specified area of the parking interaction interface;
and while the specified adjustment operation is being responded to, and within the specified time length after the specified adjustment operation ends, stop the virtual scene from changing with the position change of the vehicle model.
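This follow-suppression rule can be sketched with an injectable clock for testing; the 5-second hold is an assumed value standing in for the "specified time length":

```python
import time

class FollowController:
    """The virtual scene follows the vehicle model only when no specified
    adjustment operation is in progress and more than hold_s seconds have
    passed since the last one ended."""

    def __init__(self, hold_s=5.0, now=time.monotonic):
        self.hold_s = hold_s
        self.now = now          # injectable clock, e.g. for tests
        self._adjusting = False
        self._last_end = float("-inf")

    def begin_adjust(self):
        self._adjusting = True

    def end_adjust(self):
        self._adjusting = False
        self._last_end = self.now()

    def should_follow(self):
        return (not self._adjusting
                and self.now() - self._last_end > self.hold_s)
```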
Optionally, the first mode processing module 303 is specifically configured to execute at least one of the following:
responding to a first specified adjustment operation of a user, and adaptively changing a display area of the map model in the parking interaction interface;
responding to a second specified adjustment operation of the user, and adaptively enlarging the display range of the map model and the display size of the vehicle model in the parking interaction interface;
responding to a third specified adjustment operation of the user, and adaptively reducing the display range of the map model and the display size of the vehicle model in the parking interaction interface;
and responding to a fourth specified adjustment operation of the user, and adaptively rotating the camera source for acquiring the virtual scene.
Optionally, the first specified adjustment operation includes: a drag operation for the map model;
the second specified adjustment operation includes: forming two contacts in the parking interaction interface, and increasing the distance between the two contacts;
the third specified adjustment operation comprises: forming two contacts in the parking interaction interface, and reducing the distance between the two contacts;
the fourth specified adjustment operation comprises: and forming two contacts in the parking interaction interface, and enabling the two contacts to perform the operation of rotating in the same direction.
Optionally, in the first mode, the map model and the vehicle model are displayed in a 2D shape.
Optionally, in the second mode, the map model and the vehicle model are displayed in a 3D shape.
Optionally, in the second mode, the camera source is located above and behind the vehicle model, the camera source faces the vehicle model, and a relative position of the camera source with respect to the vehicle model remains unchanged.
Referring to fig. 9, an electronic device 40 is provided, including:
a processor 41; and the number of the first and second groups,
a memory 42 for storing executable instructions of the processor;
wherein the processor 41 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 41 is capable of communicating with the memory 42 via the bus 43.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned method.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above method embodiments may be implemented by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A scene display method for AVP, comprising:
displaying a virtual scene in a parking interaction interface of a mobile terminal during automatic parking of a current vehicle, wherein the virtual scene is configured to be capable of displaying a map model of the current parking lot and a vehicle model of the current vehicle, and the position of the vehicle model in the map model is matched with the real position of the current vehicle when the current vehicle is parked; the mobile terminal is not a vehicle machine;
determining a current display mode;
if the current display mode is the first mode, then: when the mobile terminal displays the virtual scene, displaying the map model and the vehicle model at a visual angle perpendicular to the ground of the map model, and responding to the appointed adjustment operation of a user on the virtual scene to change the display content in the parking interactive interface;
if the current display mode is the second mode, then: when the mobile terminal displays the virtual scene, the map model and the vehicle model are displayed at a view angle inclined to the ground of the map model, and a camera source is made to move along with the vehicle model all the time, wherein the camera source is used for collecting the virtual scene.
2. The method of claim 1, further comprising:
if the current display mode is the first mode, then:
when the specified adjustment operation has not been responded to, and after a specified time length following the end of the specified adjustment operation, making the virtual scene change with the position change of the vehicle model, so that the vehicle model is always located in the specified area of the parking interaction interface;
and while the specified adjustment operation is being responded to, and within the specified time length after the specified adjustment operation ends, stopping the virtual scene from changing with the position change of the vehicle model.
3. The method of claim 1,
the display content in the parking interaction interface is changed in response to the specified adjustment operation of the virtual scene by the user, and the change comprises at least one of the following:
responding to a first specified adjustment operation of a user, and adaptively changing a display area of the map model in the parking interaction interface;
responding to a second specified adjustment operation of the user, and adaptively enlarging the display range of the map model and the display size of the vehicle model in the parking interaction interface;
responding to a third specified adjustment operation of the user, and adaptively reducing the display range of the map model and the display size of the vehicle model in the parking interaction interface;
and responding to a fourth specified adjustment operation of the user, and adaptively rotating the camera source for acquiring the virtual scene.
4. The method of claim 3,
the first specified adjustment operation comprises: a drag operation for the map model;
the second specified adjustment operation includes: forming two contacts in the parking interaction interface, and increasing the distance between the two contacts;
the third specified adjustment operation comprises: forming two contacts in the parking interaction interface, and reducing the distance between the two contacts;
the fourth specified adjustment operation comprises: and forming two contacts in the parking interaction interface, and enabling the two contacts to perform the operation of rotating in the same direction.
5. The method according to any one of claims 1 to 4, wherein in the first mode, the map model and the vehicle model are displayed in a 2D shape.
6. The method according to any one of claims 1 to 4, wherein in the second mode, the map model and the vehicle model are displayed in a 3D shape.
7. The method according to any one of claims 1 to 4, wherein in the second mode, the camera source is located at the rear upper part of the vehicle model, the camera source is oriented towards the vehicle model, and the relative position of the camera source with respect to the vehicle model is kept unchanged.
8. A scene display apparatus for AVP, comprising:
the system comprises a scene display module, a parking interaction interface and a vehicle model display module, wherein the scene display module is used for displaying a virtual scene in the parking interaction interface of the mobile terminal in the process of automatic parking of a current vehicle, the virtual scene is configured to be capable of displaying a map model of the current parking lot and the vehicle model of the current vehicle, and the position of the vehicle model in the map model is matched with the real position of the current vehicle when the current vehicle is parked;
the mode selection module is used for determining the current display mode;
a first mode processing module to: if the current display mode is the first mode, then: when the mobile terminal displays the virtual scene, displaying the map model and the vehicle model at a visual angle perpendicular to the ground of the map model, and responding to the appointed adjustment operation of a user on the virtual scene to change the display content in the parking interactive interface;
a second mode processing module to: if the current display mode is the second mode, then: when the mobile terminal displays the virtual scene, the map model and the vehicle model are displayed at a view angle inclined to the ground of the map model, and a camera source is made to move along with the vehicle model all the time, wherein the camera source is used for collecting the virtual scene.
9. A storage medium having a program stored thereon, wherein the program, when executed by a processor, performs the steps of the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a program stored on the memory and running on the processor, wherein the steps of the method of any of claims 1-7 are implemented when the program is executed by the processor.
CN202111095606.4A 2021-09-17 2021-09-17 Scene display method and device for AVP, electronic equipment and storage medium Withdrawn CN113722043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111095606.4A CN113722043A (en) 2021-09-17 2021-09-17 Scene display method and device for AVP, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111095606.4A CN113722043A (en) 2021-09-17 2021-09-17 Scene display method and device for AVP, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113722043A true CN113722043A (en) 2021-11-30

Family

ID=78684545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111095606.4A Withdrawn CN113722043A (en) 2021-09-17 2021-09-17 Scene display method and device for AVP, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113722043A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895814A (en) * 2022-06-16 2022-08-12 广州小鹏汽车科技有限公司 Interaction method of vehicle-mounted system, vehicle and storage medium
CN115376360A (en) * 2022-08-10 2022-11-22 小米汽车科技有限公司 Parking and information processing method, apparatus, device, medium, and program product



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211130