CN116929400A - Navigation route generation method, terminal, electronic device and readable storage medium - Google Patents

Navigation route generation method, terminal, electronic device and readable storage medium

Info

Publication number
CN116929400A
Authority
CN
China
Prior art keywords
image
input
user
navigation
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311038053.8A
Other languages
Chinese (zh)
Inventor
李小明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311038053.8A priority Critical patent/CN116929400A/en
Publication of CN116929400A publication Critical patent/CN116929400A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/3438Rendez-vous, i.e. searching a destination where several users can meet, and the routes to this destination for these users; Ride sharing, i.e. searching a route such that at least two users can share a vehicle for at least part of the route
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3605Destination input or retrieval
    • G01C21/3614Destination input or retrieval through interaction with a road map, e.g. selecting a POI icon on a road map

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The application discloses a navigation route generation method, a terminal, electronic equipment and a readable storage medium, and belongs to the technical field of electronics. The method comprises the following steps: receiving a first input of a user while displaying at least one first image; generating a navigation route according to at least one second image in the at least one first image in response to the first input; wherein each of the first images indicates at least one location and the navigation route passes through the location indicated by the at least one second image.

Description

Navigation route generation method, terminal, electronic device and readable storage medium
Technical Field
The application belongs to the technical field of electronics, and particularly relates to a navigation route generation method, a terminal, electronic equipment and a readable storage medium.
Background
Navigation software on electronic devices is used frequently; for example, before traveling, a user searches for a route in the navigation software as a reference for reaching a destination.
In some application scenarios, there is more than one origin and destination. For example, a driver needs to pick up passengers at different locations and then travel with them to the same destination. The driver first searches for a navigation route from the current position with the location of the first passenger as the destination; after picking up the first passenger, the driver searches for a route from the current position with the location of the second passenger as the destination; and so on, until all passengers have been picked up, after which the driver searches for a route from the current position with the final destination of all passengers as the destination.
Therefore, in the prior art, when there are many departure places and destinations, the user needs to search for navigation routes frequently, and the operation is cumbersome.
Disclosure of Invention
The embodiment of the application aims to provide a navigation route generation method, a terminal, an electronic device and a readable storage medium, which can solve the problem of cumbersome operation when searching for a navigation route.
In a first aspect, an embodiment of the present application provides a navigation route generating method, including: receiving a first input of a user while displaying at least one first image; generating a navigation route according to at least one second image in the at least one first image in response to the first input; wherein each of the first images indicates at least one location and the navigation route passes through the location indicated by the at least one second image.
In a second aspect, an embodiment of the present application provides a terminal, including: a first receiving module for receiving a first input of a user in the case of displaying at least one first image; a generation module for generating a navigation route from at least one second image of the at least one first image in response to the first input; wherein each of the first images indicates at least one location and the navigation route passes through the location indicated by the at least one second image.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In an embodiment of the present application, in a scenario where a navigation route is generated, at least one first image is displayed, each first image indicating at least one position; for example, the position indicated by a certain first image may be the boarding position of one passenger. The user triggers generation of a navigation route through the first input, by which the user selects at least one of the displayed first images; each selected first image is a second image, and the number of second images is not limited. The electronic device generates a navigation route based on the positions indicated by the second images, and the generated navigation route passes through those positions. The embodiment of the application can quickly generate a navigation route based on the positions provided by the images, without requiring the user to enter destinations one by one in navigation software to search for routes, which greatly simplifies the user's operation.
Drawings
FIG. 1 is a flow chart of a navigation route generation method of an embodiment of the present application;
FIG. 2 is one of the display schematic diagrams of the electronic device according to the embodiment of the application;
FIG. 3 is a second schematic diagram of an electronic device according to an embodiment of the application;
FIG. 4 is a third schematic diagram of an electronic device according to an embodiment of the application;
FIG. 5 is a fourth schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a fifth schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a diagram of a display of an electronic device according to an embodiment of the present application;
FIG. 11 is a block diagram of a terminal according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present application;
fig. 13 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the accompanying drawings of the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used may be interchanged, where appropriate, such that embodiments of the present application may be implemented in orders other than those illustrated or described herein. The objects distinguished by "first," "second," etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The navigation route generating method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
FIG. 1 shows a flow chart of a navigation route generation method according to one embodiment of the application. Taking application to an electronic device as an example, the method includes:
step 110: a first input of a user is received with the at least one first image displayed.
The first input includes a touch input made by the user on the screen, which is not limited to a click, slide, drag, or other input. The first input may also be a contactless input by the user, such as a gesture or a facial action, and further includes an input to a physical key on the device, not limited to a press. Moreover, the first input includes one or more inputs, where the plurality of inputs may be continuous or separated in time.
In this step, a first input is used by the user to select at least one of the displayed at least one first image as a second image and generate a navigation route based on the selected at least one second image.
Wherein each first image indicates at least one location.
In some embodiments, the first image includes at least one location identifier, one location identifier indicating one location.
In some embodiments, the location identifier carries location information, which may be a street name, business name, building name, latitude and longitude, and the like.
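The specification does not fix a data format for first images or their location identifiers; as a minimal Python sketch (all class and field names are hypothetical), a first image could be modeled as carrying a list of identifiers, each holding one kind of location information:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LocationIdentifier:
    """One location identifier carried by a first image (hypothetical model)."""
    street_name: Optional[str] = None
    business_name: Optional[str] = None
    building_name: Optional[str] = None
    lat_lng: Optional[Tuple[float, float]] = None  # (latitude, longitude)

@dataclass
class FirstImage:
    """A first image indicating at least one location via its identifiers."""
    identifiers: List[LocationIdentifier] = field(default_factory=list)

    def locations(self) -> List[LocationIdentifier]:
        # Each identifier indicates one location.
        return list(self.identifiers)

img = FirstImage([LocationIdentifier(lat_lng=(31.2304, 121.4737))])
```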
In some embodiments, at least one first image is displayed in a conversation window, and the at least one first image is sent by chat members participating in the conversation. The session window may be a friend session window or a group chat session window.
When the application scenario is ride sharing among multiple people, each person can send his or her boarding position to the conversation window in the form of a first image.
In some embodiments, the user saves in advance some images indicating positions in the album, so that in a scenario that triggers generation of a navigation route, the user selects these pre-saved images from the album as the first images.
For example, referring to fig. 2, three users A, B, and C ("me") want to go on a self-guided tour together. First, a chat group named "self-guided tour group" is created, and the three users, as group members, each send a location screenshot indicating their position (e.g., "A location screenshot" 201) to the chat group. Further, referring to fig. 3, user C long-presses any screenshot, and all the screenshots enter a selectable state; user C taps the screenshot corresponding to circle control 301, and a check mark is displayed in circle control 301, completing the selection of that screenshot. In addition, after user C long-presses any screenshot, a "serial navigation" control 302 is displayed; after finishing the screenshot selection, user C taps the "serial navigation" control 302.
Step 120: in response to the first input, a navigation route is generated from at least one second image of the at least one first image.
Wherein the navigation route passes through the location indicated by the at least one second image.
In some embodiments, the navigation route is generated with the position indicated by one second image as the start position, a position indicated by another second image as the end position, and the remaining positions as waypoints along the route.
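The ordering rule above can be sketched in Python (an illustrative sketch, not part of the patent; the function name is hypothetical): the first selected position becomes the start, the last becomes the end, and the rest become waypoints:

```python
from typing import List, Tuple

def build_route(positions: List[str]) -> Tuple[str, List[str], str]:
    """Split an ordered list of selected positions into (start, waypoints, end),
    preserving the order in which the positions were selected."""
    if len(positions) < 2:
        raise ValueError("need at least a start and an end position")
    return positions[0], positions[1:-1], positions[-1]

start, waypoints, end = build_route(["my location", "A location", "B location"])
# start is "my location", the route passes "A location", and ends at "B location"
```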
In some embodiments, multiple navigation routes are generated so that the user can select the most suitable one.
For example, referring to fig. 4, controls corresponding to three navigation routes are displayed; the user taps any control, such as the third control 401, to display the navigation route indicated by that control.
In an embodiment of the present application, in a scenario where a navigation route is generated, at least one first image is displayed, each first image indicating at least one position; for example, the position indicated by a certain first image may be the boarding position of one passenger. The user triggers generation of a navigation route through the first input, by which the user selects at least one of the displayed first images; each selected first image is a second image, and the number of second images is not limited. The electronic device generates a navigation route based on the positions indicated by the second images, and the generated navigation route passes through those positions. The embodiment of the application can quickly generate a navigation route based on the positions provided by the images, without requiring the user to enter destinations one by one in navigation software to search for routes, which greatly simplifies the user's operation.
In a navigation route generation method according to another embodiment of the present application, at least one second image includes a first sub-image and a second sub-image, the first sub-image indicating a first position, and the second sub-image indicating a second position.
In this embodiment, the user selects at least two second images through the first input, and the first sub-image and the second sub-image are taken as examples for explanation.
In the flow of the present embodiment, step 110 includes:
substep A1: a first sub-input of a user to a first sub-image is received, and a second sub-input of a user to a second sub-image is received.
Step 120, including:
substep A2: generating a navigation route based on the first location and the second location; the navigation route has an association relationship with the input sequence of the first sub-input and the second sub-input.
In some embodiments, the input order of the first sub-input and the second sub-input is used to define an order of navigation route through the first location indicated by the first sub-image and the second location indicated by the second sub-image.
Wherein the first sub-input is used to select one of the at least one first image as the first sub-image; the second sub-input is used to select another of the at least one first image as the second sub-image.
In some embodiments, to facilitate a user to confirm the order in which the navigation route passes through the first location and the second location, after the user selects the first sub-image, a corresponding sequence number is displayed at the first sub-image; and after the user selects the second sub-image, displaying the corresponding serial number at the second sub-image.
For example, referring to fig. 3, the user sequentially selects "my location screenshot", "A location screenshot", and "B location screenshot"; accordingly, the sequence numbers "I", "II", and "III" are displayed next to the three screenshots, representing the first, second, and third positions in the order, respectively.
In an application scenario, if the user selects the three positions A, B, and C in different orders, different routes can be generated, such as A-B-C, C-B-A, B-C-A, B-A-C, and the like.
In this embodiment, the user may arrange different position sequences according to the requirement, so that different navigation routes may be generated, and the navigation route is made to fit the scene requirement while simplifying the user operation.
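As a small illustration of how many orderings the user can arrange, the candidate visiting orders for a set of selected positions can be enumerated (hypothetical Python helper, not part of the patent):

```python
from itertools import permutations
from typing import List

def candidate_orders(positions: List[str]) -> List[List[str]]:
    """Enumerate every visiting order the user could arrange by selecting
    the sub-images in a different sequence."""
    return [list(p) for p in permutations(positions)]

orders = candidate_orders(["A", "B", "C"])
# 3 positions -> 3! = 6 possible visiting orders
```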
In the flow of the navigation route generating method according to another embodiment of the present application, before step 110, the method further includes:
step B1: and displaying a navigation application interface, wherein the navigation application interface comprises a first positioning identifier.
The first positioning identifier is used for indicating a position in the navigation application interface, and the first positioning identifier carries specific positioning information.
For example, a user opens a navigation application, displays a navigation application interface, and the navigation application interface includes a first positioning identifier, where the first positioning identifier indicates current positioning information of the user.
Step B2: a second input of the user is received.
The second input includes a touch input made by the user on the screen, which is not limited to a click, slide, drag, or other input. The second input may also be a contactless input by the user, such as a gesture or a facial action, and further includes an input to a physical key on the device, not limited to a press. Moreover, the second input includes one or more inputs, where the plurality of inputs may be continuous or separated in time.
Step B3: and responding to the second input, obtaining a third image based on the navigation application interface, wherein the third image comprises a first positioning identifier, and the position indicated by the third image comprises a third position indicated by the first positioning identifier.
Wherein the at least one first image comprises a third image.
In this step, the second input is used by the user to trigger the screenshot function to capture an image of the navigation application interface, obtaining the third image.
For example, in the case of displaying a navigation application interface, referring to fig. 5, the user pulls down a status bar to click on a "navigation screenshot" icon 501, generating a screenshot, and obtaining a third image.
Correspondingly, in this embodiment, the third image includes the first positioning identifier, which indicates the third position. After taking the screenshot, the user may send the third image into the session window.
For example, referring to FIG. 6, after the third image is obtained, a "share" control 601 is displayed; the user taps the "share" control 601, enters the chat application, selects a session, and sends the third image to that session.
In this embodiment, the electronic device supports a navigation screenshot function; when this function is enabled, an image of the navigation application interface can be captured such that the image includes the positioning identifier.
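As a hedged sketch of the navigation screenshot idea (all names hypothetical, not from the patent), the captured third image can be modeled as bundling the interface pixels with the positioning identifiers displayed at capture time, so the image itself indicates those positions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class NavScreenshot:
    """A 'third image': interface pixels plus the positioning identifiers
    that were displayed when the screenshot was taken."""
    image_bytes: bytes
    positions: List[Tuple[float, float]]  # e.g. (latitude, longitude) pairs

def capture_nav_screenshot(pixels: bytes,
                           identifiers: List[Tuple[float, float]]) -> NavScreenshot:
    # Bundle the captured interface with its displayed positioning identifiers,
    # so the resulting image indicates those positions.
    return NavScreenshot(pixels, list(identifiers))

shot = capture_nav_screenshot(b"<pixels>", [(31.2304, 121.4737)])
```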
In the flow of the navigation route generating method according to another embodiment of the present application, before step B2, the method further includes:
step C1: a third input of a user to the first location identity is received.
The third input includes a touch input made by the user on the screen, which is not limited to a click, slide, drag, or other input. The third input may also be a contactless input by the user, such as a gesture or a facial action, and further includes an input to a physical key on the device, not limited to a press. Moreover, the third input includes one or more inputs, where the plurality of inputs may be continuous or separated in time.
Step C2: In response to the third input, a second positioning identifier is displayed.
In this step, the third input is used to trigger, while the navigation application interface is displayed, the display of the second positioning identifier based on the displayed first positioning identifier.
In some embodiments, the second positioning identifier is an identifier separate from the first positioning identifier.
For example, the navigation application interface displays the first positioning identifier; the user long-presses the first positioning identifier, a "new location" control appears, the user taps the "new location" control, and the second positioning identifier is displayed.
Step C3: A fourth input of the user to the second positioning identifier is received.
The fourth input includes a touch input made by the user on the screen, which is not limited to a click, slide, drag, or other input. The fourth input may also be a contactless input by the user, such as a gesture or a facial action, and further includes an input to a physical key on the device, not limited to a press. Moreover, the fourth input includes one or more inputs, where the plurality of inputs may be continuous or separated in time.
Step C4: Determining, in response to the fourth input, a fourth position indicated by the second positioning identifier; wherein the fourth position is different from the third position.
The positions indicated by the third image include the third position and the fourth position; in the case that the at least one second image includes the third image, the navigation route passes through the third position and the fourth position.
In this step, the fourth input is used by the user to select a corresponding position in the navigation application interface based on the displayed second positioning identifier.
For example, when the second positioning identifier is displayed, the user moves it to a position so that it is displayed at that position.
The user may trigger the display of the second positioning identifier multiple times to select multiple positions in the navigation application interface.
Further, after the user triggers the screenshot function, the captured third image includes the first positioning identifier and the second positioning identifier; that is, the third image indicates the third position and the fourth position.
In some embodiments, based on the third position and the fourth position indicated by the third image, the generated navigation route passes through both positions along the way, and the passing order may follow the order in which the user input the first positioning identifier and the second positioning identifier. For example, since the second positioning identifier is displayed based on the first positioning identifier, the input of the first positioning identifier precedes that of the second; accordingly, the navigation route passes through the third position before the fourth position.
In this embodiment, an interaction manner is provided, so that the number of positions indicated by the third image is multiple, and when a navigation route is generated, the navigation route can sequentially pass through multiple positions indicated by the third image, thereby achieving the purpose of enriching the navigation application scene.
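The per-image ordering described above can be sketched as a simple flattening step (hypothetical Python helper, not from the patent): positions are collected in image selection order, and within one image in identifier input order:

```python
from typing import List

def collect_positions(selected_images: List[List[str]]) -> List[str]:
    """Flatten the positions of the selected images into one ordered list,
    preserving image selection order and, within one image, the order in
    which its positioning identifiers were input."""
    ordered: List[str] = []
    for image_positions in selected_images:
        ordered.extend(image_positions)
    return ordered

# A third image indicating the third and fourth positions, then another image
route_positions = collect_positions([["third position", "fourth position"],
                                     ["fifth position"]])
```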
In another embodiment of the navigation route generation method of the present application, the first input includes a third sub-input and a fourth sub-input: the third sub-input is used to select the at least one second image, and the fourth sub-input is an input to a first control that triggers generating a navigation route through the at least one second image.
In this embodiment, the first control is configured to receive a fourth sub-input, the fourth sub-input being configured to trigger generation of a navigation route by a user.
For example, referring to FIG. 3, the "series navigation" control 302 corresponds to the first control.
In this embodiment, a method for triggering generation of a navigation route is provided, and after the user selects at least one second image through the third sub-input, the electronic device may generate the navigation route based on the at least one second image by performing a fourth sub-input on the first control. The interaction mode provided by the embodiment is simpler, and the purpose of simplifying user operation is achieved.
In the flow of the navigation route generating method according to another embodiment of the present application, before step 110, the method further includes:
step D1: a fifth input is received from the user.
The fifth input includes a touch input made by the user on the screen, which is not limited to a click, slide, drag, or other input. The fifth input may also be a contactless input by the user, such as a gesture or a facial action, and further includes an input to a physical key on the device, not limited to a press. Moreover, the fifth input includes one or more inputs, where the plurality of inputs may be continuous or separated in time.
Step D2: in response to the fifth input, at least one first image is displayed in the first screen area and a navigation interface is displayed in the second screen area.
In this step, a fifth input is used to trigger the screen to display the first screen region and the second screen region.
For example, referring to FIG. 2, the user presses any screenshot, such as "my position screenshot" 202; then, referring to FIG. 7, the screen displays a first screen region 701 and a second screen region 702.
The first screen area is used for displaying at least one first image, and the second screen area is used for displaying a navigation interface.
Step D3: An input is received from the user dragging at least one second image of the at least one first image from the first screen area to the second screen area.
In this step, the user selects among the at least one first image displayed in the first screen area and drags the selection to the second screen area. A first image dragged by the user is a second image.
In some embodiments, the order of dragging is used to define the order of passage of the positions indicated by the dragged second images in the navigation route.
For example, referring to FIG. 8, the user drags "A-position screenshot" 801 to the second screen region 802, whereby the navigation interface displays a navigation route passing through position A.
For another example, referring to fig. 9, the user drags "A-position screenshot" 901 to the second screen area 902 and then drags "B-position screenshot" 903 to the second screen area 902, so that the navigation route displayed by the navigation interface passes through position A first and then position B.
In this embodiment, a quick interaction mode is provided, and a user may trigger a split screen mode, in which the user may directly drag any first image displayed in one screen area to a navigation interface displayed in another screen area, so that selection of at least one second image may be quickly completed.
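The drag interaction can be sketched as follows (class and method names are hypothetical): each drag from the first screen area appends the dragged image's position, so drag order becomes passing order:

```python
from typing import List

class SplitScreenNavigation:
    """Sketch of the split-screen drag interaction: each drag appends the
    dragged image's position to the route in drag order."""

    def __init__(self) -> None:
        self.waypoints: List[str] = []

    def on_drag_to_navigation_area(self, position: str) -> None:
        # The dragged first image becomes a second image; its position
        # is appended in the order it was dragged.
        self.waypoints.append(position)

nav = SplitScreenNavigation()
nav.on_drag_to_navigation_area("A position")   # route passes A first...
nav.on_drag_to_navigation_area("B position")   # ...then B
```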
In the flow of the navigation route generating method according to another embodiment of the present application, after step D2, the method further includes:
step E1: a sixth input from the user to the navigation interface is received.
The sixth input includes a touch input made by the user on the screen, which is not limited to a click, slide, drag, or other input. The sixth input may also be a contactless input by the user, such as a gesture or a facial action, and further includes an input to a physical key on the device, not limited to a press. Moreover, the sixth input includes one or more inputs, where the plurality of inputs may be continuous or separated in time.
Step E2: in response to the sixth input, a navigation start position or a navigation end position is determined.
In this step, a sixth input is used for the user to input a navigation start position or a navigation end position at the navigation interface.
In one application scenario, the user actively searches for the navigation start position in the navigation interface, and the positions indicated by the at least one second image selected by the user serve, in order, as waypoints and the navigation end position.
In another application scenario, the user actively searches for the navigation end position in the navigation interface, and the positions indicated by the at least one second image selected by the user serve, in order, as the navigation start position and waypoints.
In another application scenario, the user actively searches for both the navigation start position and the navigation end position in the navigation interface, and the positions indicated by the at least one second image selected by the user serve, in order, as waypoints.
For example, referring to fig. 10, after the user selects at least one second image, a navigation interface is displayed, and the user edits positioning information as the navigation start position and/or the navigation end position in the "search place" edit box 10001 of the navigation interface.
In some embodiments, in a scenario where the user actively searches for both the navigation start position and the navigation end position, the number of start and end positions entered is not limited: when there are multiple navigation start positions, the first-entered start position is retained and the remaining ones serve as waypoints of the navigation route; when there are multiple navigation end positions, the last-entered end position is retained and the remaining ones serve as waypoints of the navigation route.
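The first-start/last-end rule can be sketched in Python (an illustrative helper, not from the patent):

```python
from typing import List, Tuple

def resolve_endpoints(starts: List[str], ends: List[str]) -> Tuple[str, List[str], str]:
    """Keep the first-entered start and the last-entered end;
    demote the remaining entries to waypoints."""
    start = starts[0]
    end = ends[-1]
    waypoints = starts[1:] + ends[:-1]
    return start, waypoints, end

start, waypoints, end = resolve_endpoints(["S1", "S2"], ["E1", "E2"])
# the route starts at S1, ends at E2, and passes S2 and E1 as waypoints
```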
For example, after editing a piece of positioning information, the user clicks the "add" control provided by the edit box and then edits another piece of positioning information.
For another example, the user sequentially edits a plurality of positioning information in an edit box.
In this embodiment, the user may customize the navigation start position or the navigation end position, which makes the application of the navigation route generation method more flexible. Thus, in various complex scenarios, a shortcut for generating the navigation route can be provided for the user, meeting the user's requirements in different scenarios.
In another embodiment of the present application, in an application scenario, the user selects at least two second images, where the positions indicated by the second images may be determined as the navigation start position and the navigation end position, respectively, according to the user's input order. For example, the position indicated by the first second image selected by the user is determined as the navigation start position, and the position indicated by the last second image selected by the user is determined as the navigation end position. For another example, when a second image indicates a plurality of positions, the first position of the first second image selected by the user is determined as the navigation start position, and the last position of the last second image selected by the user is determined as the navigation end position.
In this embodiment, the user may select the navigation start position or the navigation end position from the positions indicated by the second images, which not only satisfies the user's customization requirement for the navigation start position or the navigation end position, but also simplifies the operation of searching for the navigation start position or the navigation end position in the navigation interface.
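The selection-order rule of this embodiment can be expressed as a short sketch, with illustrative names only; each second image may indicate one or more positions:

```python
def endpoints_from_selection(selected_images):
    # `selected_images`: the positions of each second image, in the order
    # the user selected the images.
    positions = [p for image in selected_images for p in image]
    # The first position of the first-selected image becomes the start;
    # the last position of the last-selected image becomes the end.
    start, *waypoints, end = positions  # requires at least two positions
    return start, waypoints, end

route = endpoints_from_selection([["A"], ["B", "C"], ["D"]])
# route: ("A", ["B", "C"], "D")
```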
In the flow of the navigation route generation method according to another embodiment of the present application, after step 120, the method further includes:
step F1: a seventh input by the user to a fifth location in the navigation route is received.
The seventh input includes a touch input made by the user on the screen, such as, but not limited to, a click, slide, or drag input. The seventh input may also be a contactless input of the user, such as a gesture action or a facial action, and further includes an input of the user to a physical key on the device, such as, but not limited to, a press. Moreover, the seventh input comprises one or more inputs, wherein the plurality of inputs may be continuous or separated in time.
Step F2: in response to the seventh input, the navigation route is updated, and the updated navigation route does not pass through the fifth location.
In this step, the seventh input is used by the user to delete, based on the generated navigation route, the fifth location traversed by that route. For example, after the user long-presses the fifth location in the navigation route and drags it in a designated direction, the navigation route no longer passes through the fifth location.
In this embodiment, based on the automatically generated navigation route, a way for the user to manually adjust the path is provided, so that the generated navigation route better conforms to the user's actual requirements.
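Steps F1 and F2 amount to filtering out the dragged-away position and re-planning. A minimal sketch with a hypothetical helper, not the claimed implementation:

```python
def update_route(route_positions, fifth_location):
    # Drop the position the user long-pressed and dragged away; a real
    # implementation would re-plan the path over the remaining positions.
    return [p for p in route_positions if p != fifth_location]

updated = update_route(["A", "B", "C"], "B")
# updated: ["A", "C"]
```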
In summary, in the navigation route generation method provided by the application, the route generation steps are reduced from multiple steps to fewer steps, the interaction depth is low, and the purposes of simplifying operation, reducing interaction difficulty, and lowering learning cost are achieved. After a plurality of users each provide one screenshot, a user entering the navigation application can quickly generate a navigation route based on those screenshots, without any single user having to repeatedly search for places and insert waypoints. The method is applicable to multiple scenarios such as single-person single-destination, single-person multi-destination, multi-person same-destination, and multi-person multi-destination, and is optimized for these different navigation scenarios, improving route generation speed, reducing navigation operation difficulty, and improving user experience.
In the navigation route generation method provided by the embodiments of the application, the execution subject may be a terminal. In the embodiments of the application, the method for generating a navigation route executed by a terminal is taken as an example to describe the terminal provided by the embodiments of the application.
Fig. 11 shows a block diagram of a terminal according to an embodiment of the application, the terminal comprising:
a first receiving module 10 for receiving a first input of a user in case of displaying at least one first image;
a generation module 20 for generating a navigation route from at least one second image of the at least one first image in response to the first input;
Wherein each first image indicates at least one location and the navigation route passes through the location indicated by the at least one second image.
In an embodiment of the present application, in a scenario where a navigation route is generated, at least one first image is displayed, each first image indicating at least one position; for example, the position indicated by a certain first image may be the boarding position of one passenger. Further, the user triggers generation of a navigation route through the first input: the user selects at least one of the displayed first images, each selected first image being a second image, with the number of second images unlimited. The electronic device generates the navigation route based on the positions indicated by the second images, and the generated navigation route passes through those positions. The embodiment of the application can quickly generate a navigation route based on the positions provided by the pictures, without requiring the user to search destinations one by one in navigation software, greatly simplifying the user's operation steps.
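The interaction between the receiving module and the generation module can be condensed into a sketch. The data shapes here are hypothetical, and actual path planning over the collected positions is outside the scope of this description:

```python
def generate_route(first_images, selected_indices):
    # Each first image carries the position(s) it indicates; the user's
    # first input picks some of them as second images.
    second_images = [first_images[i] for i in selected_indices]
    # The generated route must pass through every indicated position.
    return [pos for img in second_images for pos in img["positions"]]

images = [{"positions": ["Stop 1"]}, {"positions": ["Stop 2", "Stop 3"]}]
route = generate_route(images, [0, 1])
# route: ["Stop 1", "Stop 2", "Stop 3"]
```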
In some embodiments, the at least one second image includes a first sub-image and a second sub-image, the first sub-image indicating the first location and the second sub-image indicating the second location;
The first receiving module 10 includes:
a first receiving unit for receiving a first sub-input of a first sub-image by a user and a second sub-input of a second sub-image by the user;
the generating module 20 includes:
a generation unit configured to generate a navigation route based on the first location and the second location; the navigation route has an association relationship with the input sequence of the first sub-input and the second sub-input.
In some embodiments, the terminal further comprises:
the first display module is used for displaying a navigation application interface, and the navigation application interface comprises a first positioning identifier;
the second receiving module is used for receiving a second input of a user;
the screenshot module is used for responding to the second input, obtaining a third image based on the navigation application interface, wherein the third image comprises a first positioning identifier, and the position indicated by the third image comprises a third position indicated by the first positioning identifier;
wherein the at least one first image comprises a third image.
In some embodiments, the terminal further comprises:
the third receiving module is used for receiving a third input of the first positioning identifier by a user;
the second display module is used for responding to the third input and displaying a second positioning identifier;
The fourth receiving module is used for receiving a fourth input of the second positioning identifier from the user;
the first determining module is used for responding to a fourth input and determining a fourth position indicated by the second positioning identification; wherein the fourth position is different from the third position;
the positions indicated by the third image comprise a third position and a fourth position; in the case that the at least one second image comprises a third image, the navigation route passes through the third location and the fourth location.
In some embodiments, the first input includes a third sub-input for selecting the at least one second image and a fourth sub-input that is an input to a first control that indicates a navigation route generated by the at least one second image.
In some embodiments, the terminal further comprises:
a fifth receiving module for receiving a fifth input of the user;
the third display module is used for responding to the fifth input, displaying at least one first image in a first screen area and displaying a navigation interface in a second screen area;
the first receiving module 10 includes:
and a second receiving unit for receiving an input of a user dragging at least one second image of the at least one first image from the first screen area to the second screen area.
In some embodiments, the terminal further comprises:
the sixth receiving module is used for receiving a sixth input of a user to the navigation interface;
and a second determination module for determining a navigation start position or a navigation end position in response to the sixth input.
In some embodiments, the terminal further comprises:
a seventh receiving module, configured to receive a seventh input from a user for a fifth location in the navigation route;
and the updating module is used for responding to the seventh input and updating the navigation route, and the updated navigation route does not pass through the fifth position.
The terminal in the embodiment of the application can be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (television, TV), teller machine or self-service machine, etc., and the embodiments of the present application are not specifically limited.
The terminal of the embodiment of the application may be a terminal with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiment of the application is not specifically limited.
The terminal provided by the embodiment of the application can implement the processes implemented by the method embodiments above and achieve the same technical effects, which are not repeated here.
In some embodiments, as shown in fig. 12, an electronic device 100 is further provided in the embodiments of the present application, which includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and capable of being executed on the processor 101, where the program or the instruction implements each step of any one of the embodiments of the navigation route generation method when executed by the processor 101, and the steps can achieve the same technical effect, and for avoiding repetition, a detailed description is omitted herein.
The electronic device of the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, processor 1010, camera 1011, and the like.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail herein.
Wherein, the user input unit 1007 is configured to receive a first input of a user in a case where at least one first image is displayed; a processor 1010 for generating a navigation route from at least one second image of the at least one first image in response to the first input; wherein each of the first images indicates at least one location and the navigation route passes through the location indicated by the at least one second image.
In an embodiment of the present application, in a scenario where a navigation route is generated, at least one first image is displayed, each first image indicating at least one position; for example, the position indicated by a certain first image may be the boarding position of one passenger. Further, the user triggers generation of a navigation route through the first input: the user selects at least one of the displayed first images, each selected first image being a second image, with the number of second images unlimited. The electronic device generates the navigation route based on the positions indicated by the second images, and the generated navigation route passes through those positions. The embodiment of the application can quickly generate a navigation route based on the positions provided by the pictures, without requiring the user to search destinations one by one in navigation software, greatly simplifying the user's operation steps.
In some embodiments, the at least one second image includes a first sub-image and a second sub-image, the first sub-image indicating a first location and the second sub-image indicating a second location; a user input unit 1007 further configured to receive a first sub-input of the first sub-image by a user and a second sub-input of the second sub-image by a user; a processor 1010 for generating the navigation route based also on the first location and the second location; the navigation route has an association relationship with the input sequence of the first sub-input and the second sub-input.
In some embodiments, the display unit 1006 is configured to display a navigation application interface, where the navigation application interface includes a first location identifier; a user input unit 1007 also for receiving a second input from a user; the processor 1010 is further configured to obtain, in response to the second input, a third image based on the navigation application interface, where the third image includes the first location identifier, and a location indicated by the third image includes a third location indicated by the first location identifier; wherein the at least one first image includes the third image.
In some embodiments, the user input unit 1007 is further configured to receive a third input of the first location identity by the user; a display unit 1006, further configured to display a second positioning identifier in response to the third input; a user input unit 1007, configured to receive a fourth input of the second location identifier from the user; the processor 1010 is further configured to determine, in response to the fourth input, a fourth location indicated by the second location identity; wherein the fourth position is different from the third position; the third image indicates a position including the third position and the fourth position; in the case that the at least one second image comprises the third image, the navigation route passes through the third location and the fourth location.
In some embodiments, the first input includes a third sub-input for selecting at least one second image and a fourth sub-input that is an input to a first control that indicates a navigation route generated by the at least one second image.
In some embodiments, the user input unit 1007 is further configured to receive a fifth input from the user; a display unit 1006, further configured to display the at least one first image in a first screen area and display a navigation interface in a second screen area in response to the fifth input; the user input unit 1007 is further configured to receive an input that a user drags at least one second image of the at least one first image from the first screen region to the second screen region.
In some embodiments, the user input unit 1007 is further configured to receive a sixth input from the user to the navigation interface; the processor 1010 is further configured to determine a navigation start position or a navigation end position in response to the sixth input.
In some embodiments, the user input unit 1007 is further configured to receive a seventh input by the user of the fifth location in the navigation route; the processor 1010 is further configured to update the navigation route in response to the seventh input, and the updated navigation route does not pass through the fifth location.
In summary, in the navigation route generation method provided by the application, the route generation steps are reduced from multiple steps to fewer steps, the interaction depth is low, and the purposes of simplifying operation, reducing interaction difficulty, and lowering learning cost are achieved. After a plurality of users each provide one screenshot, a user entering the navigation application can quickly generate a navigation route based on those screenshots, without any single user having to repeatedly search for places and insert waypoints. The method is applicable to multiple scenarios such as single-person single-destination, single-person multi-destination, multi-person same-destination, and multi-person multi-destination, and is optimized for these different navigation scenarios, improving route generation speed, reducing navigation operation difficulty, and improving user experience.
It should be appreciated that, in an embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which primarily processes the operating system, user interfaces, applications, etc., and a modem processor, which primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 1009 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the navigation route generation method embodiment, and can achieve the same technical effect, so that repetition is avoided, and no further description is provided here.
The processor is a processor in the electronic device of the above embodiment. Readable storage media include computer-readable storage media, such as read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and the like.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the navigation route generation method embodiment can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, chip systems, or system-on-a-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the navigation route generation method embodiment described above, and achieve the same technical effects, and are not described herein in detail to avoid repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the method and the terminal in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order according to the functions involved; e.g., the described method may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in part in the form of a computer software product stored on a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (15)

1. A navigation route generation method, the method comprising:
receiving a first input of a user while displaying at least one first image;
generating a navigation route according to at least one second image in the at least one first image in response to the first input;
wherein each of the first images indicates at least one location and the navigation route passes through the location indicated by the at least one second image.
2. The method of claim 1, wherein the at least one second image comprises a first sub-image and a second sub-image, the first sub-image indicating a first location and the second sub-image indicating a second location;
the receiving a first input from a user includes:
receiving a first sub-input of a user to the first sub-image and a second sub-input of a user to the second sub-image;
the generating a navigation route according to at least one second image in the at least one first image comprises the following steps:
generating the navigation route based on the first location and the second location; the navigation route has an association relationship with the input sequence of the first sub-input and the second sub-input.
3. The method of claim 1, wherein prior to the receiving the first input from the user, the method further comprises:
displaying a navigation application interface, wherein the navigation application interface comprises a first positioning identifier;
receiving a second input from the user;
responding to the second input, and obtaining a third image based on the navigation application interface, wherein the third image comprises the first positioning identifier, and the position indicated by the third image comprises a third position indicated by the first positioning identifier;
wherein the at least one first image includes the third image.
4. A method according to claim 3, wherein prior to said receiving the second input by the user, the method further comprises:
receiving a third input of a user to the first positioning identifier;
responsive to the third input, displaying a second location identifier;
receiving a fourth input of the second positioning identifier by a user;
determining a fourth location indicated by the second location identity in response to the fourth input; wherein the fourth position is different from the third position;
the third image indicates a position including the third position and the fourth position; in the case that the at least one second image comprises the third image, the navigation route passes through the third location and the fourth location.
5. The method of claim 1, wherein the first input comprises a third sub-input for selecting at least one second image and a fourth sub-input that is an input to a first control that indicates a navigation route to be generated through the at least one second image.
6. The method of claim 1, wherein prior to the receiving the first input from the user, the method further comprises:
receiving a fifth input of the user;
displaying the at least one first image in a first screen area and displaying a navigation interface in a second screen area in response to the fifth input;
the receiving a first input from a user includes:
an input is received that a user drags at least one second image of the at least one first image from the first screen area to the second screen area.
7. The method of claim 6, wherein after displaying the navigation interface in the second screen area, the method further comprises:
receiving a sixth input of a user to the navigation interface;
in response to the sixth input, a navigation start position or a navigation end position is determined.
8. The method of claim 1, wherein after the generating the navigation route, the method further comprises:
receiving a seventh input of a user to a fifth location in the navigation route;
in response to the seventh input, the navigation route is updated, and the updated navigation route does not pass through the fifth location.
9. A terminal, the terminal comprising:
a first receiving module for receiving a first input of a user in the case of displaying at least one first image;
a generation module for generating a navigation route from at least one second image of the at least one first image in response to the first input;
wherein each of the first images indicates at least one location and the navigation route passes through the location indicated by the at least one second image.
10. The terminal of claim 9, wherein the at least one second image comprises a first sub-image and a second sub-image, the first sub-image indicating a first location and the second sub-image indicating a second location;
the first receiving module includes:
a first receiving unit for receiving a first sub-input of a user to the first sub-image and a second sub-input of a user to the second sub-image;
The generating module comprises:
a generation unit configured to generate the navigation route based on the first location and the second location; the navigation route has an association relationship with the input sequence of the first sub-input and the second sub-input.
11. The terminal according to claim 9, characterized in that the terminal further comprises:
the first display module is used for displaying a navigation application interface, and the navigation application interface comprises a first positioning identifier;
the second receiving module is used for receiving a second input of a user;
the screenshot module is used for responding to the second input, obtaining a third image based on the navigation application interface, wherein the third image comprises the first positioning identifier, and the position indicated by the third image comprises a third position indicated by the first positioning identifier;
wherein the at least one first image includes the third image.
12. The terminal of claim 11, wherein the terminal further comprises:
a third receiving module configured to receive a third input from a user on the first positioning identifier;
a second display module configured to display a second positioning identifier in response to the third input;
a fourth receiving module configured to receive a fourth input from the user on the second positioning identifier;
a first determining module configured to determine, in response to the fourth input, a fourth location indicated by the second positioning identifier, wherein the fourth location is different from the third location;
wherein the locations indicated by the third image comprise the third location and the fourth location, and in the case that the at least one second image comprises the third image, the navigation route passes through the third location and the fourth location.
13. The terminal of claim 9, wherein the first input comprises a third sub-input for selecting the at least one second image and a fourth sub-input on a first control, the first control indicating that a navigation route passing through the location indicated by the at least one second image is to be generated.
14. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the navigation route generation method of any of claims 1 to 8.
15. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the navigation route generation method according to any one of claims 1 to 8.
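Outside the claim language, the route-generation behavior recited in claims 9, 10, and 12 can be illustrated with a small sketch: the user selects a subset of images, each image indicates one or more locations, and the route's waypoints follow the order in which the images were selected. All names below (`Image`, `generate_navigation_route`, the sample coordinates) are hypothetical illustrations and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Image:
    """Hypothetical stand-in for a 'first image' indicating one or more locations."""
    name: str
    locations: list = field(default_factory=list)  # (latitude, longitude) pairs

def generate_navigation_route(selected_images):
    """Build an ordered waypoint list from the images the user selected.

    `selected_images` plays the role of the 'at least one second image'; its
    order reflects the input order of the user's sub-inputs, so the waypoint
    order along the route is associated with that input order (claim 10).
    """
    route = []
    for image in selected_images:
        # The route passes through every location the image indicates
        # (claim 12 allows one screenshot to indicate two locations).
        route.extend(image.locations)
    return route

cafe = Image("photo_cafe", [(39.92, 116.42)])
park = Image("screenshot_park", [(39.95, 116.38), (39.96, 116.39)])

# Selecting cafe first, then park, yields waypoints in that input order.
print(generate_navigation_route([cafe, park]))
```

Selecting the same images in the opposite order would produce a route whose waypoints are reversed, which is the "association relationship" between route and input sequence that claim 10 recites.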
CN202311038053.8A 2023-08-16 2023-08-16 Navigation route generation method, terminal, electronic device and readable storage medium Pending CN116929400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311038053.8A CN116929400A (en) 2023-08-16 2023-08-16 Navigation route generation method, terminal, electronic device and readable storage medium


Publications (1)

Publication Number Publication Date
CN116929400A true CN116929400A (en) 2023-10-24

Family

ID=88375380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311038053.8A Pending CN116929400A (en) 2023-08-16 2023-08-16 Navigation route generation method, terminal, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN116929400A (en)

Similar Documents

Publication Publication Date Title
CN112269508B (en) Display method and device and electronic equipment
CN113596555B (en) Video playing method and device and electronic equipment
CN113918522A (en) File generation method and device and electronic equipment
CN114518822A (en) Application icon management method and device and electronic equipment
WO2024088209A1 (en) Position information acquisition method and apparatus
CN112836142A (en) Aggregation point processing method based on position sharing
CN114374663B (en) Message processing method and message processing device
CN116929400A (en) Navigation route generation method, terminal, electronic device and readable storage medium
CN115167721A (en) Display method and device of functional interface
CN113268961A (en) Travel note generation method and device
CN114374761A (en) Information interaction method and device, electronic equipment and medium
CN112783998A (en) Navigation method and electronic equipment
CN113037618B (en) Image sharing method and device
CN112764632B (en) Image sharing method and device and electronic equipment
CN115586937A (en) Interface display method and device, electronic equipment and readable storage medium
WO2024022432A1 (en) Information processing methods and apparatus, and electronic device
CN116069429A (en) Application processing method, device, electronic equipment and medium
CN114356164A (en) Sharing method and sharing device
CN117407135A (en) Task execution method and device and electronic equipment
CN115720219A (en) Group creation method and device, electronic equipment and medium
CN116708336A (en) Message sending method and device
CN115987928A (en) Message processing method and device
CN117234649A (en) Information display method and device and electronic equipment
CN115696290A (en) Information sharing method and device, electronic equipment and readable storage medium
CN114090818A (en) Navigation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination