CN116069887A - Information processing method, intelligent terminal and storage medium - Google Patents

Information processing method, intelligent terminal and storage medium

Info

Publication number
CN116069887A
Authority
CN
China
Prior art keywords
information
search
service
map
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310094717.6A
Other languages
Chinese (zh)
Inventor
徐鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd filed Critical Shanghai Chuanying Information Technology Co Ltd
Priority to CN202310094717.6A
Publication of CN116069887A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2453Query optimisation
    • G06F16/24532Query optimisation of parallel queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/248Presentation of query results
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Remote Sensing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides an information processing method, an intelligent terminal, and a storage medium. The method comprises the following steps: in response to obtaining at least two pieces of target retrieval information, searching a preset map; and simultaneously displaying, on the map, the search results corresponding to each piece of target retrieval information. With this technical solution, a user can satisfy multiple search requirements with a single search when searching a map, which improves the convenience of searching with a map application and improves the user experience.

Description

Information processing method, intelligent terminal and storage medium
Technical Field
The application relates to the technical field of intelligent terminals, in particular to an information processing method, an intelligent terminal and a storage medium.
Background
With the development of intelligent terminal technology, intelligent terminals offer more and more functions. With the development of electronic map technology, users increasingly rely on the electronic maps in their intelligent terminals to query, search, navigate, and so on, and they also demand more diverse map-based applications. As a result, more and more applications, such as takeaway applications and rental applications, integrate electronic maps to provide more comprehensive services.
In the process of designing and implementing the present application, the inventors found at least the following problems: limited by the screen size of the intelligent terminal, current applications generally support map display and target search near only a single address at a time. If a user wants to search for targets near several addresses, the same search operation must be repeated for each address: after viewing the results for one address, the user must return to the address input interface, enter the next address, and then view its results. The operation is cumbersome, and because the results of each search are displayed separately, the user can only compare the results across addresses from memory. This is quite inconvenient and, owing to memory errors, easily leads to many repeated operations.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an information processing method, an intelligent terminal, and a storage medium, so that a user can satisfy multiple search requirements with a single search when searching a map, improving the convenience of searching with a map application.
The application provides an information processing method which can be applied to an intelligent terminal and comprises the following steps:
s10: searching in a preset map in response to obtaining at least two target retrieval information;
s20: and simultaneously displaying the search results corresponding to the target search information on a map.
Optionally, the step S20 includes:
acquiring or determining at least two target map areas according to the retrieval results corresponding to the at least two target retrieval information;
and adjusting the at least two target map areas to be in a preset display area.
Optionally, the distance between at least one target map area and the boundary of the preset display area is smaller than a preset distance threshold.
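As a concrete illustration of the adjustment described above, the following Python sketch merges the bounding boxes of several target map areas and pads them so that no area touches the boundary of the display area. The function name, the `(min_lon, min_lat, max_lon, max_lat)` region representation, and the margin handling are illustrative assumptions, not part of the patent.

```python
def fit_regions_to_viewport(regions, viewport_w, viewport_h, margin=0.05):
    """Merge the bounding boxes of several target map areas and compute
    the centre and spans needed to show them all in one display area.

    Each region is (min_lon, min_lat, max_lon, max_lat). Returns
    (centre_lon, centre_lat, span_lon, span_lat), with the spans padded
    so each region stays clear of the viewport boundary.
    """
    min_lon = min(r[0] for r in regions)
    min_lat = min(r[1] for r in regions)
    max_lon = max(r[2] for r in regions)
    max_lat = max(r[3] for r in regions)

    # Pad by `margin` on each side so no region touches the boundary.
    span_lon = (max_lon - min_lon) * (1 + 2 * margin)
    span_lat = (max_lat - min_lat) * (1 + 2 * margin)

    # Preserve the viewport's aspect ratio so the map is not distorted.
    aspect = viewport_w / viewport_h
    if span_lon / span_lat < aspect:
        span_lon = span_lat * aspect
    else:
        span_lat = span_lon / aspect

    centre_lon = (min_lon + max_lon) / 2
    centre_lat = (min_lat + max_lat) / 2
    return centre_lon, centre_lat, span_lon, span_lat
```

A map application would then centre the viewport on the returned point and choose the zoom level whose visible span covers the returned spans.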
Optionally, the step S10 includes the steps of:
s11: determining or generating at least two target retrieval information in response to acquiring the at least one address information and the at least one service information;
s12: searching in a preset map based on the target retrieval information.
Optionally, the service information includes at least one of a service type and a service object.
Optionally, the service type includes at least one of a nearby location search, a path planning, and a ground scene matching.
Optionally, before step S11, the method further includes: in response to acquiring a voice instruction of the user, confirming or acquiring at least one piece of address information and at least one piece of service information according to the voice instruction.
Optionally, the voice instruction includes a first voice instruction and/or a second voice instruction.
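One way to extract address and service information from a transcribed voice instruction is a simple pattern match over the recognized text. The sketch below is purely illustrative: the keyword "near", the clause splitting, and the function name are assumptions and do not reflect the patent's actual recognition logic.

```python
import re

def parse_voice_instruction(text):
    """Rough sketch: pull (address, service) pairs out of a transcribed
    voice command such as "hospitals near Central Park and schools near
    Main Street". Pattern and keywords are illustrative assumptions.
    """
    pairs = []
    # Split the command into clauses on "and" or commas.
    for clause in re.split(r"\band\b|,", text):
        m = re.search(r"(?P<service>.+?)\s+near\s+(?P<address>.+)", clause.strip())
        if m:
            pairs.append((m.group("address").strip(), m.group("service").strip()))
    return pairs
```

Each returned pair supplies one piece of address information and one piece of service information for step S11.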
Optionally, the step S10 includes:
the method comprises the steps of obtaining a first voice instruction and a second voice instruction.
Optionally, the step S11 includes:
determining a corresponding relation between each piece of service information and each piece of address information in response to acquiring at least one piece of address information and at least one piece of service information;
and determining or generating target retrieval information according to the corresponding relation between the service information and the address information.
Optionally, after the step S20, the method further includes:
acquiring combined service information in response to a selection operation of selecting at least one search result to be combined from the search results;
and determining or generating new target retrieval information according to the combination service information and each search result to be combined, and returning to execute the step S10.
Optionally, after the step S20, the method further includes:
and displaying (such as grouping display) each search result in a preset display area.
Optionally, the obtaining the combined service information in response to a selection operation of selecting at least one search result to be combined from the search results includes:
outputting and displaying a combined service information input box in response to a selection operation of selecting at least one search result to be combined from among the search results;
and acquiring the combined service information.
The application also provides an intelligent terminal, comprising: a memory and a processor, wherein the memory stores an information processing program which, when executed by the processor, implements the steps of any of the information processing methods described above.
The present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of any of the information processing methods described above.
As described above, the information processing method provided by the present application comprises: in response to obtaining at least two pieces of target retrieval information, searching a preset map; and simultaneously displaying, on the map, the search results corresponding to each piece of target retrieval information. With this technical solution, a user can enter two or more search requirements at once. When the map application obtains two or more pieces of target retrieval information representing those requirements, it searches the preset map for all of them at the same time and displays the corresponding search results on the map simultaneously, so the user can intuitively see, on a single map, the results for every requirement. This avoids the situation in which, after searching for a single piece of target retrieval information, the user cannot intuitively compare distance, position, and other information and can only compare results from memory and imagination. It thus solves the poor convenience of searching with a map application across multiple addresses or multiple services, improves the efficiency and accuracy of the user's decisions, provides convenience for the user, and further improves the user experience.
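Consistent with the parallel-query classification above (G06F16/24532), the simultaneous search of step S10 could be sketched with a thread pool that runs every target search concurrently and preserves input order for the simultaneous display of step S20. This is an illustrative sketch under that assumption, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def search_all_targets(queries, search_fn):
    """Run every target search concurrently (step S10) and return the
    results in input order, ready for simultaneous display (step S20)."""
    with ThreadPoolExecutor() as pool:
        # pool.map preserves the order of `queries` in its results.
        return list(pool.map(search_fn, queries))
```

Because `map` preserves order, each result can be matched back to its target retrieval information when the map renders all results at once.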
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a schematic diagram of a communication network system according to an embodiment of the present application;
FIG. 3 is a flowchart of a first embodiment of an information processing method according to the present application;
FIG. 4 is a flowchart of a second embodiment of an information processing method according to the present application;
FIG. 5 is a schematic diagram of a display interface of an address input box and a service input box in the present application;
FIG. 6 is a schematic diagram of a display interface of the present application after hiding an address input box and a service input box;
fig. 7 is a flowchart of a fourth embodiment of the information processing method of the present application.
Reference numerals:
1: Address input box
2: Service input box
3: Search button
The implementation, functional characteristics, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings. Specific embodiments have been shown by way of example in the drawings and are described in more detail herein. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, elements having the same name in different embodiments of the present application may have the same meaning or different meanings; the particular meaning is determined by its interpretation in the particular embodiment or by the further context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination", depending on the context. Furthermore, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or", "and/or", "including at least one of", and the like, as used herein, may be construed as inclusive, meaning any one or any combination. For example, "including at least one of A, B, C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means the same. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is in some way inherently mutually exclusive.
It should be understood that although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the figures may comprise multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The word "if", as used herein, may be interpreted as "when", "upon", "in response to a determination", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)", depending on the context.
It should be noted that step numbers such as S10 and S20 are used herein to describe the corresponding content more clearly and concisely, and do not substantially limit the order; those skilled in the art may, in a specific implementation, execute S20 before S10, and this remains within the protection scope of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present application, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
The intelligent terminal may be implemented in various forms. For example, the smart terminals described in the present application may include mobile smart terminals such as cell phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as stationary terminals such as digital TVs and desktop computers.
The following description will be given taking a mobile terminal as an example, and those skilled in the art will understand that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for a moving purpose.
Referring to fig. 1, which is a schematic hardware structure of a mobile terminal implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and the like. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 1 does not limit the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be used for receiving and transmitting signals during information reception or a call. Specifically, after receiving downlink information from the base station, it passes the information to the processor 110 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and 5G, among others.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 102, the mobile terminal can help a user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that the module is not an essential component of the mobile terminal and may be omitted as required without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the mobile terminal 100. The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive an audio or video signal. The A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to the mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting the audio signal.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor, optionally, the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; as for other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured in the mobile phone, the detailed description thereof will be omitted.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1071 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects the touch azimuth of the user, detects a signal brought by touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 110, and can receive and execute commands sent from the processor 110. Further, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 107 may include other input devices 1072 in addition to the touch panel 1071. Alternatively, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc., as specifically not limited herein.
Alternatively, the touch panel 1071 may overlay the display panel 1061; when the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 1 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 may be integrated with the display panel 1061 to implement those functions, which is not limited herein.
The interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and an external device.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, and alternatively, the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, the application processor optionally handling mainly an operating system, a user interface, an application program, etc., the modem processor handling mainly wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based will be described below.
Referring to fig. 2, fig. 2 is a schematic diagram of a communication network system provided in an embodiment of the present application. The communication network system is an LTE system of universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204 that are sequentially connected in communication.
Alternatively, the UE201 may be the terminal 100 described above, which is not described here again.
The E-UTRAN 202 includes eNodeB 2021, other eNodeBs 2022, and the like. Optionally, the eNodeB 2021 may connect with the other eNodeBs 2022 over a backhaul (e.g., an X2 interface); the eNodeB 2021 is connected to the EPC 203 and may provide the UE 201 with access to the EPC 203.
EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway, packet data network gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 is used to provide registers such as a home location register (not shown) to manage functions, and to hold user-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
IP services 204 may include the Internet, intranets, an IMS (IP Multimedia Subsystem), or other IP services, etc.
Although the LTE system is described above as an example, it should be understood by those skilled in the art that the present application is not limited to LTE systems, but may be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (e.g., 6G), etc.
Based on the above-mentioned mobile terminal hardware structure and communication network system, various embodiments of the present application are presented.
First embodiment
Referring to fig. 3, fig. 3 is a flowchart illustrating a first embodiment of an information processing method according to the present application. In this embodiment, the information processing method of the present application may be applied to the above-mentioned intelligent terminal, and based on this, in this embodiment and other possible embodiments described below, the information processing method of the present application is described with the intelligent terminal as a scheme execution body.
In this embodiment, the information processing method of the present application includes the steps of:
S10: searching in a preset map in response to obtaining at least two pieces of target retrieval information;
in this embodiment, it should be noted that, the map refers to an electronic map, and the map may be applied to navigation applications, or may be applied to other functional applications that need to use a map, for example, take-away applications, rental-room applications, etc., to assist in improving the convenience of using the main functions of the application. The target retrieval information refers to related information for retrieving a service object in the map, and the service object can be a positioning point, a place, a path, a ground scene, a keyword and the like.
User operation is continuously detected while the intelligent terminal is running, or while a map-related application is running, and it is detected whether target retrieval information can be extracted from the user operation. In response to obtaining at least two pieces of target retrieval information, a plurality of groups of service objects, service types, and search ranges to be searched are determined according to each piece of target retrieval information, and after the groups are determined, a search is performed in a preset map to obtain retrieval results matching each group of service object, service type, and search range. Optionally, the user operation may be a voice operation, a text input operation, a selection operation, or the like.
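Purely as an illustration (not part of the patent), the search loop described above might be sketched as follows. The `TargetRetrieval` fields and the dictionary-backed `map_index` are hypothetical stand-ins for a real map search backend:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetRetrieval:
    service_object: str   # e.g. "restaurant" (hypothetical field names)
    service_type: str     # "nearby_search", "path_planning", "scene_matching"
    search_range: str     # e.g. "within 500 m of A1"

def search_map(map_index, retrievals):
    """Run one map query per target retrieval and collect every result set,
    so all results can later be displayed on the map at the same time."""
    results = {}
    for r in retrievals:
        key = (r.service_object, r.service_type, r.search_range)
        # map_index is a toy stand-in for the real map search backend
        results[key] = map_index.get(key, [])
    return results
```

A retrieval with no match simply yields an empty list, mirroring the 0-result case described later in the text.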
Optionally, the step S10 includes the steps of:
S11: determining or generating at least two pieces of target retrieval information in response to acquiring the at least one piece of address information and the at least one piece of service information;
S12: searching in a preset map based on the target retrieval information.
In this embodiment, it should be noted that the address information is information about an area to be searched. If the initial address extracted from the detected user operation is a place, for example, an A1 restaurant, an A2 parking lot, or an A3 office building, the range within a preset search radius of the initial address may be determined as the address information, with the initial address as the circle center; optionally, the preset search radius may be preset according to actual conditions, or may be a user-defined value. If no initial address is extracted from the detected user operation, the current positioning position may be taken as the initial address and the range within a preset search radius of it, with it as the circle center, determined as the address information; the whole map may also be determined as the address information. If the initial address extracted from the detected user operation is a plurality of points or a range of areas, for example, within B1 meters of the A1 restaurant, within the A2 parking lot and the A3 office building, or within B2 meters of a path from the A2 parking lot to the A3 office building, the initial address may be directly determined as the address information. The service information refers to related information of a service object that can be searched in a map; optionally, the service object may be a positioning point, a place, a path, a ground scene, a keyword, and the like. That is, the service information includes a nearby location search, path planning, ground scene matching, and so on; for example, the service information may be a restaurant search, a parking lot and movie theatre search, a search for the path with the fewest traffic lights, a search for a place in a photo, and the like.
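The rules above for turning an initial address into address information can be sketched as follows. This is a minimal illustration only: the function name, the dictionary shape, and the 500-meter default radius are assumptions, not values from the patent.

```python
def resolve_address_region(initial_address, current_position, default_radius=500):
    """Turn a (possibly missing) initial address into a circular search region.

    Mirrors two of the rules in the text: a single place becomes a circle of
    a preset search radius around it, and a missing address falls back to the
    current positioning position as the circle center.
    """
    center = initial_address if initial_address is not None else current_position
    return {"center": center, "radius_m": default_radius}
```

When the user operation already names a region (e.g. "within B1 meters of the A1 restaurant"), the text says that region is used directly, so no resolution step like this would be needed.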
User operation is continuously detected while the intelligent terminal or the map-related application is running, and it is detected whether address information and/or service information can be extracted from the user operation. In response to obtaining at least one piece of address information and at least one piece of service information, at least two pieces of target retrieval information are determined or generated based on each piece of address information and each piece of service information, and a search is performed in a preset map based on each piece of target retrieval information respectively, to obtain retrieval results corresponding to each piece of target retrieval information. Optionally, each piece of target retrieval information includes at least one piece of address information and at least one piece of service information.
Optionally, the service information includes at least one of a service type and a service object.
Optionally, the service type includes at least one of a nearby location search, a path planning, and a ground scene matching.
In this embodiment, it should be noted that the service information includes at least one of a service type and a service object, where the service type includes at least one of a nearby location search, path planning, and ground scene matching, and the service object may be at least one of a positioning point, a place, a path, a ground scene, a keyword, and the like. If the service type in the service information is a nearby location search, the service object may be a place, a ground scene, a keyword, etc., for example, a search for nearby popular check-in spots, or a search for nearby restaurants and parking lots. If the service type in the service information is path planning, a service object may or may not exist in the service information, for example, searching for the path with the fewest traffic lights or the path with the fastest arrival. If the service type in the service information is ground scene matching, the service object is a ground scene picture.
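The three service types and their differing requirements on the service object can be sketched as a small dispatch routine. This is a hypothetical illustration; the returned strings merely stand in for the real search routines:

```python
def dispatch(service_type, service_object=None):
    """Pick the search routine for a piece of service information,
    following the three service types named in the text."""
    if service_type == "nearby_search":
        # a nearby search needs a concrete object: a place, scene, or keyword
        if service_object is None:
            raise ValueError("nearby search needs a place, scene, or keyword")
        return f"search_nearby({service_object})"
    if service_type == "path_planning":
        # the service object is optional, e.g. "fewest traffic lights"
        return f"plan_path({service_object or 'default'})"
    if service_type == "scene_matching":
        # the service object is a ground scene picture
        if service_object is None:
            raise ValueError("scene matching needs a ground scene picture")
        return f"match_scene({service_object})"
    raise ValueError(f"unknown service type: {service_type}")
```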
S20: and simultaneously displaying the search results corresponding to the target search information on a map.
In this embodiment, after the search results corresponding to each piece of target retrieval information are obtained, the search results may be further output and displayed. When they are output, the search results corresponding to each piece of target retrieval information are displayed on the map simultaneously, so that all the search results can be shown on one display interface, which is convenient for a user to view and compare intuitively. Optionally, a search result is a specific and accurate place or path, and the number of search results corresponding to each piece of target retrieval information may be 0, 1, or more; if it is 0, no matching service object could be found according to that target retrieval information.
Optionally, after the step S20, the method further includes the steps of:
A10, acquiring combined service information in response to a selection operation of selecting at least one search result to be combined from the search results;
In this embodiment, when searching based on the map, the user often searches again based on the search results. For example, having searched for restaurants near place A and movie theatres near place B, the user may then pick a preferred restaurant a and movie theatre b from the search results and wish to plan a path from restaurant a to movie theatre b.
After the search results corresponding to each group of retrieval information are obtained, user operation may be continuously detected. In response to a selection operation of selecting at least one search result to be combined from the search results, the search results to be combined selected by the user are determined based on the selection operation, and the combined service information input by the user based on the search results to be combined is acquired.
In one embodiment, the step a10 includes the steps of:
A11, outputting and displaying a combined service information input box in response to a selection operation of selecting at least one search result to be combined from the search results;
in this embodiment, after obtaining the search results corresponding to each set of search information, the user operation may be continuously detected, and in response to a selection operation of selecting at least one search result to be combined from the search results, the search result to be combined selected by the user is determined based on the selection operation, and a combined service information input box corresponding to the search result to be combined is output and displayed.
And A12, acquiring the combined service information.
In this embodiment, the combined service information input by the user via a voice instruction, typing, or a selection operation based on the combined service information input box is acquired.
A20, determining or generating new target retrieval information according to the combined service information and the search results to be combined, and returning to execute the step S10.
In this embodiment, the combined service information is used as new service information, and the address information in the search result to be combined is used as new address information, and the step S10 is executed in a return manner.
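Step A20 — building new target retrieval information from the selected results plus the combined service information — can be sketched as follows. The field names (`location`, `service`, `addresses`) are illustrative assumptions:

```python
def combine(results_to_combine, combined_service):
    """Build a new target retrieval from selected search results: the
    combined service information becomes the new service information, and
    each selected result's location becomes a new piece of address
    information, after which step S10 would be executed again."""
    return {
        "service": combined_service,
        "addresses": [r["location"] for r in results_to_combine],
    }
```

For the running example, selecting restaurant a and movie theatre b and entering "path planning" would yield a retrieval whose addresses are the two picked locations.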
In one implementation manner, the service type is a nearby location search, the number of pieces of address information is 1, and the step S10 includes: in a preset map, performing a location search in a circular area with the address information as the circle center and a preset search radius as the radius, to obtain a search result.
In one implementation manner, the service type is path planning, the address information includes start address information and end address information, and the step S10 includes: in a preset map, performing a path search from the start address information to the end address information to obtain a search result, where the path search includes a shortest-path search, a least-congested-path search, and a fewest-traffic-lights path search.
In one implementation manner, the service type is path planning, the number of pieces of address information exceeds 2, and the step S10 includes: determining start address information, end address information, and waypoint address information from the address information, and, in a preset map, performing a path search from the start address information through the waypoint address information to the end address information to obtain a search result, where the path search includes a shortest-path search, a least-congested-path search, and a fewest-traffic-lights path search.
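As a toy illustration of the start-through-waypoints-to-end ordering, the following computes the straight-line length of such a route. A real planner would of course search the road graph for the shortest, least-congested, or fewest-traffic-lights route; only the point ordering here reflects the text:

```python
import math

def route_length(start, end, waypoints=()):
    """Straight-line length of a start -> waypoints -> end route: a crude
    stand-in for the path search described in the text."""
    pts = [start, *waypoints, end]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
```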
In one implementation manner, the service type is ground scene matching, the number of pieces of address information is 0, and the step S10 includes: obtaining a ground scene image to be searched, and performing a ground scene matching search for the image to be searched in a preset map to obtain a search result.
In one implementation manner, the service type is ground scene matching, the number of pieces of address information is at least one, and the step S10 includes: acquiring an image to be searched; in a preset map, determining a plurality of circular areas, each with one piece of address information as the circle center and a preset search radius as the radius; and, in each circular area, performing a ground scene matching search for the image to be searched to obtain a search result.
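Restricting the scene matching search to the circular areas around each address can be sketched as a simple pre-filter. The scene records and their `pos` field are hypothetical; the actual image matching is out of scope here:

```python
import math

def candidates_in_circles(scenes, centers, radius):
    """Restrict ground scene matching to circular areas around each
    address (the at-least-one-address case): only scenes inside some
    circle are then compared against the image to be searched."""
    return [
        s for s in scenes
        if any(math.dist(s["pos"], c) <= radius for c in centers)
    ]
```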
In this embodiment, the information processing method includes the steps of: acquiring a voice instruction; searching in a preset map in response to obtaining at least two pieces of target retrieval information; and simultaneously displaying the search results corresponding to the target retrieval information on a map. Through this technical scheme, a user can input two or more search requirements at the same time. When the map application obtains two or more pieces of target retrieval information representing the user's search requirements, it responds by searching the preset map for all of them and displaying the corresponding search results on the map simultaneously, so that the user can intuitively see the search results for every requirement on the map at the same time. This effectively avoids the situation in which, after searching for a single piece of target retrieval information at a time, information such as distance and position cannot be compared intuitively and comparison can only be done mentally through memory and imagination. It thereby solves the problem of poor convenience when using a map application to search based on multiple addresses or multiple services, improves the efficiency and accuracy of the user's decision-making, provides convenience for the user, and further improves user experience.
Second embodiment
Referring to fig. 4, fig. 4 is a flowchart illustrating a second embodiment of the information processing method of the present application. In this embodiment, the step S20 includes the steps of:
S21: acquiring or determining at least two target map areas according to the retrieval results corresponding to the at least two pieces of target retrieval information;
In this embodiment, after the search results corresponding to the target retrieval information are obtained, the positioning information of each search result is determined, and the target map areas corresponding to the search results are determined according to each piece of positioning information, where each target map area includes one piece of positioning information; for example, the range within a preset distance of the positioning information may be determined as a target map area.
S22: and adjusting the at least two target map areas to be in a preset display area.
In this embodiment, it should be noted that, the preset display area refers to an area on the intelligent terminal for displaying a map, and in an implementation manner, the display area may be a display interface of a map-related application.
Optionally, the image of the map is scaled according to the position information of each target map area, so that every target map area lies within the preset display area, and each search result is marked and displayed in its target map area, allowing a user to view all the search results intuitively and comprehensively without manual adjustment.
Optionally, the distance between at least one target map area and the boundary of the preset display area is smaller than a preset distance threshold.
In this embodiment, when the image of the map is scaled, not only does every target map area lie within the preset display area, but the distance between at least one target map area and the boundary of the preset display area may also be smaller than a preset distance threshold. In this way, while the display integrity of the target map areas is guaranteed, each search result can be displayed as large as possible, so that the user can see the information in the target map areas comprehensively and clearly.
In this embodiment, the target map areas to be displayed are determined according to the positioning information of all the search results, and the map is then scaled to the maximum size at which all the search results can still be displayed, so that the user can view all the search results in the same picture and grasp the specific map information most clearly, thereby improving user experience.
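One way to realize "the maximum size at which all results still fit" is to take the bounding box of all target map areas and compute the largest scale at which it fits the display with a small margin. This is a sketch under assumptions: the area dictionaries, a uniform-scale viewport, and the pixel margin are all illustrative, not from the patent:

```python
def fit_view(areas, display_w, display_h, margin=10):
    """Largest zoom factor at which the bounding box of all target map
    areas fits the preset display area, leaving a margin so at least one
    area sits close to the display boundary.

    Each area is {"x_min", "x_max", "y_min", "y_max"} in map units;
    assumes the areas do not all collapse to a single point.
    """
    xs = [x for a in areas for x in (a["x_min"], a["x_max"])]
    ys = [y for a in areas for y in (a["y_min"], a["y_max"])]
    box_w = max(max(xs) - min(xs), 1e-9)
    box_h = max(max(ys) - min(ys), 1e-9)
    # the binding dimension fixes the scale; the other gets extra slack
    return min((display_w - 2 * margin) / box_w,
               (display_h - 2 * margin) / box_h)
```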
Third embodiment
In this embodiment, before the step S11, the method further includes:
responding to a voice instruction of a user, and confirming or acquiring at least one piece of address information and at least one piece of service information according to the voice instruction of the user;
In this embodiment, it should be noted that, because of the screen size of the intelligent terminal, the map functions in various applications are currently relatively limited, which makes it quite inconvenient for the user to input addresses and view results, and makes misoperation likely. Address input in current applications is limited to automatic positioning of the current location or manual input, and manual input takes a long time. As the number of pieces of address information to be input increases, the input boxes become smaller and more densely packed; and the more functions a functional application has, the more complex the content displayed on the screen becomes, which causes inconvenience for the user and increases misoperation.
Voice information of the user may be continuously collected while the intelligent terminal or the map-related application is running, and the collected voice information may be used directly as a voice instruction. The input process of a voice instruction may also be triggered by detecting a specific vocabulary in the voice information, or by detecting the user's related operation on the intelligent terminal; for example, the input process may be triggered when a click operation on a voice input button is detected, or when a voice-input gesture operation is detected.
Optionally, during the input process of the voice instruction, the user's voice information is collected. The voice information may be used directly as the voice instruction, or part of it may be extracted as the voice instruction after preprocessing; in response to the user's voice instruction, at least one piece of address information and at least one piece of service information are extracted from it. Optionally, the preprocessing includes noise reduction, effective information extraction, and the like: the effective information extraction may determine and retain the effective parts of the voice information through semantic analysis while filtering out noise or ineffective parts, or match the voice information against preset voice instruction templates and determine the matching target template as the voice instruction.
In the searching process, one or more voice instructions may be acquired, and a voice instruction may include address information and/or service information. For example, a first voice instruction input based on an address input box may include only address information, and a second voice instruction input based on a service input box may include only service information. A voice instruction may also be a semantically mixed sentence, such as "find a movie theatre within B1 meters of the A1 restaurant", where the address information is "within B1 meters of the A1 restaurant" and the service information is "find a movie theatre".
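Splitting such a mixed sentence into address information and service information can be sketched with a toy pattern match. A real system would use semantic analysis as the text says; this regex, the function name, and the concrete "500 meters" example are all assumptions covering only this one sentence shape:

```python
import re

def parse_instruction(text):
    """Toy split of a semantically mixed instruction into service and
    address information, for sentences like
    'find a movie theatre within 500 meters of the A1 restaurant'."""
    m = re.match(r"find (?:a |an )?(.+?) within (\d+) meters of (?:the )?(.+)", text)
    if not m:
        return None  # a real system would prompt the user to re-input
    service, radius, place = m.groups()
    return {"service": f"find {service}",
            "address": f"within {radius} m of {place}"}
```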
Address information and/or service information is extracted from the voice instruction. If no address information and/or service information can be extracted, prompt information is output to remind the user to input it again. If one piece of address information and one piece of service information are extracted from the voice instruction, step S11 may be performed. If more than one piece of address information and/or more than one piece of service information is extracted, the correspondence between the pieces of address information and the pieces of service information may be determined through semantic analysis; the pieces of address information and/or service information may also be output and displayed separately so that the user can manually adjust or set the correspondences; or the correspondences may be determined according to which voice instructions the pieces of address information and service information came from. For example, as shown in fig. 5, the display interface contains three groups of search information: when the user clicks the address input box 1 of the first group, the address information of the first group can be input through a voice instruction; when the user clicks the service input box 2 of the first group, the service information of the first group can be input through a voice instruction; and likewise for the address input box and service input box of the second group, so that address information and service information entered through the same group of input boxes correspond to each other.
In one embodiment, the step of obtaining the voice command of the user includes:
the method comprises the steps of obtaining a first voice instruction and a second voice instruction.
Optionally, a first voice command input by the user based on the address input box and a second voice command input by the user based on the service input box are acquired.
In this embodiment, when a click operation on the address input box is detected, the voice instruction input process is triggered, first voice information is collected, effective information is extracted from it, and a first voice instruction is generated; the first voice instruction should include at least one piece of address information, and in one implementation, if no address information is identified from the first voice instruction, first prompt information may be output to remind the user to input the first voice instruction again. When a click operation on the service input box is detected, the voice instruction input process is triggered, second voice information is collected, effective information is extracted from it, and a second voice instruction is generated, where the second voice instruction includes at least one piece of service information.
In one embodiment, before the step of obtaining the first voice instruction and the second voice instruction, the method further includes: hiding the address input box and the service input box, and outputting and displaying them when a triggering operation on them is detected, where the triggering operation may be a click operation on a search button, a voice-input gesture operation, or the like. For example, referring to fig. 5 and 6, fig. 5 is a schematic diagram of a display interface showing the address input box and the service input box, and fig. 6 is a schematic diagram of the display interface after the address input box 1 and the service input box 2 are hidden. In fig. 6, after the address input box 1 and the service input box 2 are hidden, a search button 3 is generated on the display interface; when a click operation on the search button 3 is detected, the address input box 1 and the service input box 2 can be displayed on the display interface, as shown in fig. 5. Alternatively, the address input box 1 and the service input box 2 may remain hidden, and the color, pattern, or other appearance of the search button may be adjusted to prompt the user that voice instruction input is currently in progress, so that the user can input the voice instruction directly according to the prompt.
In this embodiment, through the above technical scheme, a user may input address information and/or service information through a voice instruction. Compared with manual clicking or typing, inputting a voice instruction is faster and more efficient. Moreover, as the amount of address information and/or service information to be input grows, compared with acquiring each piece through an input box, on the one hand the screen space needed for voice input is greatly reduced, so more space is available for displaying the search results and the user can view them more clearly; on the other hand, the misoperations that clicking or typing on the screen may cause are reduced, and in turn the repeated operations the user must perform to correct them are reduced. Together these aspects improve the convenience of searching with the map application, solve the problem of poor convenience when searching based on multiple addresses or multiple services, and further improve user experience.
Fourth embodiment
Referring to fig. 7, fig. 7 is a flowchart of a fourth embodiment of the information processing method of the present application. In this embodiment, the step S11 includes the steps of:
S111: determining a corresponding relation between each piece of service information and each piece of address information in response to acquiring at least one piece of address information and at least one piece of service information;
In this embodiment, since the specific search operation may differ for different service information, the number of pieces of service information may be detected first. To avoid errors, if multiple pieces of service information and/or multiple pieces of address information are detected, the correspondence between each piece of service information and each piece of address information needs to be determined first. Optionally, the correspondences may be determined through semantic analysis; the pieces of address information and/or service information may be output and displayed separately so that the user can manually adjust or set the correspondences; or the correspondences may be determined according to whether the voice instructions corresponding to the pieces of address information and service information match.
If the service type is a nearby location search or ground scene matching, each piece of service information may correspond to one or more pieces of address information, and each piece of address information may correspond to one or more pieces of service information, but each correspondence relates one piece of service information to one piece of address information. Corresponding target retrieval information is then determined or generated based on each correspondence, where each piece of target retrieval information includes only one piece of service information and one piece of address information. That is, different service information may correspond to the same address information: for example, for "search for restaurants and movie theatres near place A", the address information is the area near place A and the service information is a nearby restaurant search and a nearby movie theatre search, but two pieces of target retrieval information are determined or generated, namely searching for restaurants near place A and searching for movie theatres near place A. Different address information may also correspond to the same service information: for example, for "search for restaurants near A and B", the address information is the areas near A and near B and the service information is a nearby restaurant search, but two pieces of target retrieval information are determined or generated, namely searching for restaurants near A and searching for restaurants near B.
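Expanding every (service, address) correspondence into its own target retrieval item can be sketched as a cross product. This is an illustrative simplification: it assumes every service applies to every address, whereas the text also allows the correspondences to be narrowed by semantic analysis or user adjustment:

```python
from itertools import product

def expand_retrievals(addresses, services):
    """Each (service, address) pair becomes one target retrieval item,
    so 'restaurants and movie theatres near A' yields two searches
    near A, and 'restaurants near A and B' yields one search per area."""
    return [{"service": s, "address": a} for s, a in product(services, addresses)]
```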
If the service type is path planning, each corresponding relation refers to a corresponding relation between one service information and a plurality of address information, and then corresponding target retrieval information is determined or generated based on each corresponding relation, wherein each target retrieval information at this time comprises one service information and a plurality of address information.
S112: and determining or generating target retrieval information corresponding to each corresponding relation.
In this embodiment, the target search information corresponding to each of the correspondence relationships is determined or generated, respectively.
In one implementation, after step S112, the method further includes:
and outputting and displaying each search result.
Optionally, all the search results, whether or not they correspond to the same target search information, can be output and displayed simultaneously in the same display area, so that a user can intuitively compare the search results and make an overall plan; alternatively, the search results may be output and displayed in groups.
Optionally, after the step S112, the method further includes:
and displaying (such as grouping display) each search result in a preset display area.
In this embodiment, the search results are displayed in groups according to their corresponding target search information. For example, search results corresponding to the same target search information are listed in the same result list, and search results corresponding to different target search information are displayed in different result lists; when one result list is displayed, the other result lists may be folded and hidden.
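The grouped display can be modeled by bucketing each search result under the target search information that produced it. This is a sketch only; the dict-of-lists shape and the `(target, hit)` pair representation are assumptions, not the patent's data model:

```python
from collections import defaultdict

def group_results(results):
    """results: iterable of (target_search_info, search_result) pairs.
    Returns one result list per target, so each list can be rendered
    as its own (collapsible) group in the display area."""
    groups = defaultdict(list)
    for target, hit in results:
        groups[target].append(hit)
    return dict(groups)

groups = group_results([("restaurants near A", "Cafe X"),
                        ("restaurants near A", "Bistro Y"),
                        ("theaters near C", "Cinema Z")])
```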
In this embodiment, a user often plans different trips or has different demands at different places, and the multiple trips or demands may be related to each other. For example, the user may plan to run at place A, eat at place B, and watch a movie at place C, and therefore may need to search for a park or gym near place A, a restaurant near place B, and a movie theater near place C, so that multiple pieces of service information need to be set. In the related art, for multiple pieces of service information, each place must be searched individually and each search result is displayed individually, so the user has to rely on memory to connect the results. For example, after finishing a run at a gym near place A, the user searches for a restaurant near place B and finds one very close to a subway entrance, but may not remember whether there is a subway station near the gym or how far away it is, and has to open the map near the gym again; the operation is cumbersome and inconvenient. In this embodiment, the target search information corresponding to each piece of service information can be determined or generated, the searches are performed accordingly, and the search results are displayed together, so that the user can view and compare the results more intuitively.
The embodiment of the application also provides an intelligent terminal, which comprises a memory and a processor, wherein the memory is stored with an information processing method program, and the information processing method program is executed by the processor to realize the steps of the information processing method in any embodiment.
The embodiment of the present application further provides a storage medium, on which an information processing method program is stored, which when executed by a processor, implements the steps of the information processing method in any of the above embodiments.
The embodiments of the intelligent terminal and the storage medium provided in the present application may include all technical features of any one of the embodiments of the information processing method, and their extended descriptions are substantially the same as those of the method embodiments, which are not repeated herein.
The present embodiments also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method in the various possible implementations as above.
The embodiments also provide a chip including a memory for storing a computer program and a processor for calling and running the computer program from the memory, so that a device on which the chip is mounted performs the method in the above possible embodiments.
It can be understood that the above scenario is merely an example, and does not constitute a limitation on the application scenario of the technical solution provided in the embodiments of the present application, and the technical solution of the present application may also be applied to other scenarios. For example, as one of ordinary skill in the art can know, with the evolution of the system architecture and the appearance of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and pruned according to actual needs.
In this application, a given term, concept, technical solution, and/or application scenario description is generally described in detail only when it first appears; for brevity, later occurrences are generally not described again, and the earlier related detailed description may be consulted when understanding the content of the technical solutions of the present application.
In this application, the descriptions of the embodiments are focused on, and the details or descriptions of one embodiment may be found in the related descriptions of other embodiments.
The technical features of the technical solutions of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the above embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the present application.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, although in many cases the former is preferred. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, fiber optic, digital subscriber line) or wireless (e.g., infrared, radio, microwave, etc.) means. The storage media may be any available media that can be accessed by a computer, or a data storage device such as a server or data center that contains an integration of one or more available media. Usable media may be magnetic media (e.g., floppy disks, memory disks, tape), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk (SSD)), or the like.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (10)

1. An information processing method, characterized by comprising the steps of:
S10: searching in a preset map in response to obtaining at least two target retrieval information;
S20: simultaneously displaying the search results corresponding to the target search information on a map.
2. The method of claim 1, wherein the step S20 comprises:
acquiring or determining at least two target map areas according to the retrieval results corresponding to the at least two target retrieval information;
and adjusting the at least two target map areas to be in a preset display area.
3. The method of claim 2, wherein a distance of at least one of the target map areas from the preset display area boundary is less than a preset distance threshold.
4. A method according to any one of claims 1 to 3, wherein said S10 step comprises the steps of:
S11: determining or generating at least two target retrieval information in response to acquiring at least one address information and at least one service information;
S12: searching in a preset map based on the target retrieval information.
5. The method of claim 4, wherein the service information includes at least one of a service type and a service object; and/or the service type comprises at least one of a nearby location search, a path planning and a ground scene matching.
6. The method of claim 4, further comprising, prior to step S11:
and in response to acquiring the voice instruction of the user, confirming or acquiring at least one address information and at least one service information according to the voice instruction of the user.
7. The method of claim 4, wherein the step S11 comprises:
determining a corresponding relation between each piece of service information and each piece of address information in response to acquiring at least one piece of address information and at least one piece of service information;
and determining or generating target retrieval information according to the corresponding relation between the service information and the address information.
8. A method according to any one of claims 1 to 3, further comprising, after step S20:
acquiring combined service information in response to a selection operation of selecting at least one search result to be combined from the search results;
and determining or generating new target retrieval information according to the combination service information and each search result to be combined, and returning to execute the step S10.
9. An intelligent terminal, characterized by comprising: a memory, a processor, on which an information processing method program is stored, which when executed by the processor, implements the steps of the information processing method according to any one of claims 1 to 8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the information processing method according to any one of claims 1 to 8.
CN202310094717.6A 2023-01-31 2023-01-31 Information processing method, intelligent terminal and storage medium Pending CN116069887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310094717.6A CN116069887A (en) 2023-01-31 2023-01-31 Information processing method, intelligent terminal and storage medium


Publications (1)

Publication Number Publication Date
CN116069887A true CN116069887A (en) 2023-05-05

Family

ID=86169604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310094717.6A Pending CN116069887A (en) 2023-01-31 2023-01-31 Information processing method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN116069887A (en)


Legal Events

Date Code Title Description
PB01 Publication