WO2019153925A1 - A search method and related apparatus - Google Patents

A search method and related apparatus

Info

Publication number
WO2019153925A1
WO2019153925A1 · PCT/CN2018/123766 · CN2018123766W
Authority
WO
WIPO (PCT)
Prior art keywords
search
user
text data
virtual
search result
Prior art date
Application number
PCT/CN2018/123766
Other languages
English (en)
French (fr)
Inventor
高爽
余浩
张婷婷
冯科
赵博
张龙
刘虎
王勋
Original Assignee
北京搜狗科技发展有限公司
Priority date
Filing date
Publication date
Application filed by 北京搜狗科技发展有限公司
Publication of WO2019153925A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/951 - Indexing; Web crawling techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Definitions

  • the present application relates to the field of information technology, and in particular, to a search method and related apparatus.
  • the user inputs the search content in text form in the search input box of a search engine; the search engine retrieves multiple search results corresponding to the search content and displays them as a text list.
  • the technical problem to be solved by the present application is to provide a search method and related device that integrate Augmented Reality (AR) technology into the search process, which not only improves search efficiency but also enhances the sense of a real scene and improves the user experience.
  • the embodiment of the present application provides a search method, including:
  • the text data corresponding to the search result is displayed by using AR technology, and/or the voice data converted by the search result is output.
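The claimed sequence (capture a real scene, take voice input, convert it to text, query a search server, then present the result via an AR overlay and/or speech) can be sketched as a minimal pipeline. This is an illustration only: the patent names no concrete APIs, so the recognizer, the search server, and the AR layer below are simulated stand-ins with invented names.

```python
# Hypothetical sketch of the claimed AR search flow (S101-S105).
# Every component is a stand-in: speech recognition, the search
# server round trip, and AR rendering are simulated.

def speech_to_text(voice_data: bytes) -> str:
    # Stand-in for a real speech recognizer.
    return voice_data.decode("utf-8")

def query_search_server(text: str) -> str:
    # Stand-in for the remote search server.
    return f"Result for: {text}"

def ar_search(voice_data: bytes, output_voice: bool = False) -> dict:
    """Run one search turn: voice in -> text -> search -> AR/voice out."""
    text = speech_to_text(voice_data)       # S103: convert voice to text
    result = query_search_server(text)      # S104: send text, receive result
    output = {"ar_overlay": result}         # S105: overlay result text via AR
    if output_voice:
        output["tts"] = result              # and/or speak the result aloud
    return output

print(ar_search(b"Beijing weather", output_voice=True))
```

Passing `output_voice` mirrors the "and/or" in the claim: the result can be overlaid, spoken, or both.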
  • before the camera device is started to capture a real scene and a virtual object is generated, the method further includes:
  • a search request of a user is received, the search request being generated according to a user operation of the AR search key.
  • before the voice data input by the user is acquired, the method further includes:
  • launching an AR application; displaying an AR search key by using AR technology in the user interface that displays the real scene and the virtual object; and receiving a search request of a user, the search request being generated according to the user's operation of the AR search key.
  • the virtual object includes: a virtual key or virtual prompt information
  • the virtual key is used to enable or disable the voice input function according to a triggering operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in an enabled or disabled state;
  • acquiring the voice data input by the user includes: acquiring the voice data input by the user when the voice input function is enabled.
  • the virtual object includes a virtual cartoon object, and when the text data corresponding to the search result is displayed by using the AR technology, the method further includes:
  • presenting the virtual cartoon object displayed in the user interface as an animated image answering the question.
  • before the text data corresponding to the search result is displayed by using the AR technology, the method further includes:
  • presenting the virtual cartoon object displayed in the user interface as an animated image thinking about the question.
  • the method further includes: obtaining user information;
  • the sending the text data to the search server includes: transmitting the text data and the user information to a search server;
  • receiving the search result corresponding to the text data returned by the search server includes: receiving the search result corresponding to the text data and the user information returned by the search server.
  • displaying the text data corresponding to the search result by using AR technology includes:
  • displaying, by using AR technology, the text data corresponding to the search results in a multi-screen scrolling manner, wherein the text data corresponding to one search result is displayed on each screen.
  • it also includes:
  • the recorded video includes the dynamically changing real scene and the virtual cartoon object, and also records any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data into which the search result is converted.
  • the application provides a search device, including: a startup module, a generation module, a display module, a conversion module, a sending module, and a receiving module;
  • the startup module is configured to start a camera to capture a real scene
  • the generating module is configured to generate a virtual object
  • the display module is configured to display the real scene and the virtual object in a user interface by using an augmented reality AR technology
  • the conversion module is configured to acquire voice data input by a user, and convert the voice data into corresponding text data;
  • the sending module is configured to send the text data to a search server
  • the receiving module is configured to receive a search result corresponding to the text data returned by the search server;
  • the display module is further configured to display the text data corresponding to the search result by using AR technology in the user interface that displays the real scene and the virtual object, and/or the device further includes an output module configured to output the voice data into which the search result is converted.
  • the startup module is further configured to start a search application or a search webpage
  • the display module is further configured to display an AR search key in the user interface
  • the receiving module is further configured to receive a search request of a user, where the search request is generated according to an operation of the AR search key by a user.
  • the startup module is further configured to start an AR application, where the display module is further configured to display an AR search key by using an AR technology in the user interface that displays the real scene and the virtual object;
  • the receiving module is further configured to receive a search request of a user, and the search request is generated according to a user operation of the AR search key.
  • the virtual object includes: a virtual key or virtual prompt information
  • the virtual key is used to enable or disable the voice input function according to a triggering operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in an enabled or disabled state;
  • the conversion module is specifically configured to acquire voice data input by the user when the voice input function is in an open state.
  • the virtual object includes: a virtual cartoon object
  • the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated image answering the question when the text data corresponding to the search result is displayed by using AR technology.
  • the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated image thinking about the question before the text data corresponding to the search result is displayed.
  • it also includes:
  • a first obtaining module configured to acquire user information
  • the sending module is specifically configured to send the text data and the user information to a search server;
  • the receiving module is specifically configured to receive the text data returned by the search server and the search result corresponding to the user information.
  • when displaying the text data corresponding to the search result by using AR technology, the display module is specifically configured to display, by using AR technology, the text data corresponding to the search results in a multi-screen scrolling manner, wherein the text data corresponding to one search result is displayed on each screen.
  • the second acquiring module is configured to acquire a screen capture image of the display content of the user interface, where the screen capture image shows the real scene, the virtual cartoon object, and the text data corresponding to the search result.
  • the third acquiring module is configured to acquire a recorded video of the display content of the user interface, where the recorded video includes the dynamically changing real scene and the virtual cartoon object, and also records any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data into which the search result is converted.
  • An embodiment of the present application provides an apparatus for searching, including a memory, and one or more programs, wherein one or more programs are stored in a memory and configured to be executed by one or more processors
  • the one or more programs include instructions for performing the following operations:
  • the text data corresponding to the search result is displayed by using AR technology, and/or the voice data converted by the search result is output.
  • the embodiments of the present application also provide a machine readable medium having stored thereon instructions that, when executed by one or more processors, cause the apparatus to perform any of the method embodiments described above.
  • the camera device is activated to capture a real scene, and a virtual object is generated; the real scene and the virtual object are displayed in the same user interface by using AR technology; the voice data input by the user is acquired and converted into corresponding text data, which is sent to a search server; and after the search result of the search server for the text data is obtained, the text data corresponding to the search result is displayed by using AR technology in the user interface that displays the real scene and the virtual object, and/or the voice data into which the search result is converted is output.
  • the AR technology is integrated into the search process: the real scene and the virtual object are displayed in the same user interface by using AR technology, voice interaction with the virtual object is simulated, and the search result is presented through AR technology or voice. Therefore, the embodiment of the present invention not only allows multiple search results to be obtained more intuitively for viewing, thereby improving search efficiency, but also enhances the sense of a real scene through voice interaction with the virtual object, improving the user experience.
  • FIG. 1 is a schematic flowchart diagram of an embodiment of a method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure
  • FIG. 3 is a block diagram of an apparatus 300 for searching, according to an exemplary embodiment
  • FIG. 4 is a schematic structural diagram of a server in an embodiment of the present invention.
  • the search engine detects multiple search results corresponding to the search content and displays them as a text list. For example, if the user inputs "weather" in the search input box, the search engine retrieves multiple search results corresponding to "weather", such as "Beijing weather condition: XX" and "Shanghai weather condition: XX", and displays them as a text list.
  • the present application provides a search method and related device, which integrate AR technology into the search process, so that the user can obtain the search results more intuitively for viewing, improving search efficiency, and which enhance the sense of reality through voice interaction with the virtual object, improving the user experience.
  • the AR technology is a technique of superimposing virtual objects such as corresponding images, videos, and 3D models in a real scene captured by a camera.
  • the goal of this technique is to place virtual objects on a screen and interact with them in a real scene.
  • a number of AR-related products have emerged, such as virtual pet-raising games played in real-world scenes using AR technology, or finding virtual red packets in real scenes using AR technology.
  • users often interact by clicking on the screen; for example, the user triggers a corresponding action by clicking a corresponding virtual button on the screen. In this way, the interaction between the user and the virtual pet in the real scene cannot be simulated; for example, the user cannot have a voice conversation with the virtual pet, which reduces the user experience.
  • the embodiment of the present invention adopts an interaction mode closer to a real scene, thereby enhancing the user experience.
  • the technical solutions of the embodiments of the present invention are specifically described below.
  • an embodiment of the present invention provides an embodiment of a method for searching.
  • This embodiment may be specifically applied to a terminal device (also referred to as user equipment); common terminal devices include, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer (Personal Digital Assistant, PDA), a mobile internet device, and a wearable device (for example, a head-mounted device).
  • S101: Start the camera device to capture a real scene and generate a virtual object.
  • the image pickup device refers to a device having a photographing and/or recording function.
  • the camera device can be built in the terminal device.
  • the camera device can be the built-in camera of a mobile phone; the camera device can also be an external device, for example, a camera on an external head-mounted device, where the head-mounted device and the terminal device are connected through a communication network, and the terminal device starts the camera on the head-mounted device by sending an opening command to the head-mounted device.
  • a virtual object refers to a virtualized object for voice interaction with a user.
  • the virtual object may include any one or more of a virtual key, virtual prompt information, and a virtual cartoon image. Examples follow.
  • the virtual object may include virtual keying and/or virtual prompt information.
  • the virtual switch key is used to enable or end the voice input function according to the trigger operation of the user.
  • the virtual key is a virtual button displayed on the user interface; the user turns on the voice input function by clicking the virtual button displayed on the user interface.
  • After that, the user can input voice data. The user can turn off the voice input function by clicking the virtual button again; the voice input function can also be turned off automatically after a preset time (the remaining time can be prompted by displaying a time progress bar, a countdown, etc.), or after it is automatically detected that the user has finished inputting the voice data.
  • the virtual prompt information is used to prompt the user that the voice input function is on or off.
  • the terminal device turns on the voice input function and displays virtual prompt information on the user interface, for example: "the voice input function is enabled, voice can be input", or indicates the remaining time by displaying a time progress bar, a countdown, and the like.
  • the user can input voice data, and after the preset time or after detecting the user operation, the voice input function is turned off.
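The toggle-plus-timeout behaviour described above (a virtual key that opens voice input, with a countdown or progress bar until it closes automatically) can be modelled as a small state holder. This is a minimal sketch assuming monotonic timestamps; the class and method names are invented for illustration, not taken from the patent.

```python
import time

class VoiceInputSwitch:
    """Hypothetical model of the virtual key / prompt behaviour:
    a toggle that also expires after a preset time."""

    def __init__(self, timeout_s: float = 10.0):
        self.timeout_s = timeout_s
        self.enabled_at = None   # monotonic timestamp when enabled

    def toggle(self, now=None):
        """Click the virtual button: enable if off, disable if on."""
        now = time.monotonic() if now is None else now
        self.enabled_at = None if self.enabled_at is not None else now

    def is_enabled(self, now=None) -> bool:
        if self.enabled_at is None:
            return False
        now = time.monotonic() if now is None else now
        if now - self.enabled_at >= self.timeout_s:
            self.enabled_at = None   # auto-close after the preset time
            return False
        return True

    def remaining(self, now=None) -> float:
        """Remaining open time, e.g. for a countdown or progress bar."""
        if self.enabled_at is None:
            return 0.0
        now = time.monotonic() if now is None else now
        return max(0.0, self.timeout_s - (now - self.enabled_at))
```

Passing an explicit `now` keeps the expiry logic testable without real waiting; in a device, the default monotonic clock would be used.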
  • the virtual object may include a virtual cartoon image
  • the virtual cartoon image is used to simulate a question and answer process with the user.
  • the animation effect of the virtual cartoon image can be triggered by touch or the like, or the image of the virtual cartoon image can be selected and replaced.
  • the search process based on the AR technology in the embodiment of the present invention may be switched to from the traditional text search process by using a corresponding AR search key.
  • the AR search function may be set in a search application (i.e., an application having a search function) or a search webpage. After the search application or search webpage is started, a search input box and an AR search key are displayed in the user interface; the search input box is used to input search content in text form, supporting text-based search by the user.
  • when the user wants to perform an AR-technology-based search, the AR search key is triggered, and the terminal device receives the user's search request, wherein the search request is generated according to the user's operation of the AR search key; according to the user's search request, the camera device is activated to capture a real scene and generate a virtual object.
  • the search function may be embedded in the AR-related application, and the AR-based search in the embodiment of the present invention is implemented by setting a corresponding AR search key in the AR application.
  • after the AR application is started, the AR search key can be displayed by using AR technology in the user interface that displays the real scene and the virtual object.
  • the AR search key is triggered when the user wants to implement an AR technology based search.
  • the terminal device receives a search request of the user, wherein the search request is generated according to an operation of the AR search key by the user, and the voice data is acquired according to the search request of the user.
  • S102: Display the real scene and the virtual object in a user interface by using AR technology.
  • the virtual object may include a virtual key and a virtual cartoon image, and the real scene, the virtual key and the virtual cartoon image captured by the camera in real time are superimposed and displayed in the same user interface.
  • S103: Acquire the voice data input by the user, and convert the voice data into corresponding text data.
  • in the embodiment of the present invention, an input keyboard does not need to be displayed on the screen, and the search content does not need to be input by tapping the screen; instead, the interaction between the user and the virtual object in a real scene is simulated through voice input, thereby enhancing the user experience.
  • the voice data can be used to describe a question raised by the user.
  • the voice data input by the user can be obtained when the voice input function is turned on.
  • the user can turn on the voice input function by clicking the virtual button displayed on the user interface.
  • the user can input voice data, such as “Beijing Weather”.
  • the user can turn off the voice input function by clicking the virtual button again; the voice input function can also be turned off automatically after a preset time (the remaining time can be prompted by displaying a time progress bar, a countdown, etc.), or after it is automatically detected that the user has finished inputting the voice data.
  • the voice data input by the user when the voice input function is turned on is obtained and converted into corresponding text data.
  • the voice input function can also be turned on automatically; for example, after the search application is entered, the voice input function is turned on, and when the user inputs voice data, the voice data is acquired. At this time, the voice input function can be turned off by clicking the virtual button, turned off automatically after a preset time (the remaining time can be prompted by displaying a time progress bar, a countdown, etc.), or turned off after it is automatically detected that the user has finished inputting the voice data.
  • the voice data input by the user may be acquired by using a microphone in the terminal device when the voice input function is enabled.
  • S104: Send the text data to a search server, and receive a search result corresponding to the text data returned by the search server.
  • the search server may specifically be a server for performing data search in a search engine.
  • the search server can search for corresponding search results based on the text data. If the voice data input by the user is used to describe the question posed by the user, the search result returned by the search server can be used to reply to the question posed by the user.
  • for example, the text data converted from the user's input voice data is: "Beijing weather"
  • the search server searches for the corresponding search result: "Beijing weather: the lowest temperature is -7 degrees, the highest temperature is 5 degrees, cloudy, northwest wind of force 2; it feels a bit cold for outdoor activities."
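The "Beijing weather" example above can be mimicked with a toy stand-in for the search server round trip (S104). The patent specifies no protocol or API, so this sketch simply maps recognized query text to canned results, including the failure prompt described later in the detailed description; all names and strings are illustrative.

```python
# Toy stand-in for the search server side of S104: map recognized
# query text to a result, or to a failure prompt when nothing matches.

SEARCH_INDEX = {
    "Beijing weather": (
        "Beijing weather: lowest -7 degrees, highest 5 degrees, "
        "cloudy, northwest wind of force 2."
    ),
}

def search(text_data: str) -> str:
    """Return the search result for the recognized text, or a prompt
    indicating that the search failed (cf. the failure case below)."""
    return SEARCH_INDEX.get(text_data, "Search failed, please try again.")

print(search("Beijing weather"))
```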
  • S105: Display, in the user interface that displays the real scene and the virtual object, the text data corresponding to the search result by using AR technology, and/or output the voice data into which the search result is converted.
  • a plurality of output manners of the search results are provided.
  • the text data corresponding to the search result may be superimposed and displayed on the user interface by using the AR technology.
  • the virtual cartoon image can simulate a question-and-answer process with the user, so when the text data corresponding to the search result is superimposed and displayed, the virtual cartoon object can be presented as an animated image answering the question; for example, the virtual cartoon image shows an animation of speaking.
  • before the search result is displayed, the virtual cartoon image may be presented as an animated image thinking about the question.
  • the search result can be converted into voice data and output, that is, the voice data is played, and the interaction between the user and the virtual object in the real scene is further simulated.
  • the virtual cartoon image can also be displayed in an animated image that answers the question.
  • the text data corresponding to the search result is superimposed and displayed, and the voice data into which the search result is converted is simultaneously output.
  • if the search fails, prompt text data may be generated for prompting that the search has failed; in the user interface that displays the real scene and the virtual object, the prompt text data is displayed by using AR technology, and/or the voice data into which the prompt text data is converted is output.
  • the AR technology is integrated into the search process: the real scene and the virtual object are displayed in the same user interface by using AR technology, voice interaction with the virtual object is simulated, and the search result is presented through AR technology or voice. Therefore, the embodiment of the present invention not only allows multiple search results to be obtained more intuitively for viewing, thereby improving search efficiency, but also enhances the sense of a real scene through voice interaction with the virtual object, improving the user experience. It should be emphasized that in the embodiment of the present invention an input keyboard does not need to be displayed on the screen and the search content does not need to be input by tapping the screen; instead, the interaction between the user and the virtual object in a real scene is simulated through voice input, thereby enhancing the user experience.
  • when the search server performs the search, the search results may be further filtered by using the user information.
  • the method further includes: acquiring user information; the sending the text data to the search server includes: sending the text data and the user information to the search server; and the receiving the search result corresponding to the text data returned by the search server includes: receiving the search result corresponding to the text data and the user information returned by the search server.
  • the user information may be a user identifier, user attribute information (such as user location information, user gender, user preference, etc.), user history behavior information (such as user history search information), and the like.
  • for example, the user inputs voice data that is converted into the text data "Today's Weather"; the user's location information, Beijing, is acquired, and the location information and the text data are sent to the search server for an associated search.
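The associated search can be sketched as bundling the recognized text with user information into one request, so the server can resolve a vague query like "Today's Weather" against the user's location. The patent only says the two are sent together; the field names ("query", "location") and the request shape below are invented for illustration.

```python
# Hypothetical sketch of the associated search request: the recognized
# text and the user information travel together in one payload.

def build_search_request(text_data: str, user_info: dict) -> dict:
    request = {"query": text_data}
    # e.g. location, gender, preferences, history search information
    request.update(user_info)
    return request

req = build_search_request("Today's Weather", {"location": "Beijing"})
print(req)
```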
  • only one search result may be output to the user.
  • when searching for the search result corresponding to the text data, the search server searches for the search result with the highest matching degree and returns that one search result to the terminal device; alternatively, the terminal device receives multiple search results returned by the search server, filters the multiple search results, and outputs the search result with the highest matching degree after the screening.
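The client-side variant of this filtering step, receiving several scored results and keeping only the best match, can be sketched as follows. The patent does not define how "matching degree" is computed, so the scores here are illustrative placeholders.

```python
# Keep only the highest-matching result from a list of scored
# candidates; scores are illustrative stand-ins for "matching degree".

def best_result(results: list[tuple[str, float]]) -> str:
    """results: (text, matching_degree) pairs; return the top text."""
    return max(results, key=lambda r: r[1])[0]

results = [("Beijing weather: cloudy", 0.92),
           ("Shanghai weather: rain", 0.41)]
print(best_result(results))  # the single result shown to the user
```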
  • displaying the text data corresponding to the search result by using the AR technology includes: displaying, by using AR technology, the text data corresponding to the search results in a multi-screen scrolling manner, wherein the text data corresponding to one search result is displayed on each screen.
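The one-result-per-screen scrolling display amounts to splitting the result list into "screens" the AR layer can page through. A minimal sketch, with the function name and per-screen count chosen for illustration:

```python
# Split results into screens; by default one search result per screen,
# matching the multi-screen scrolling display described above.

def paginate(results: list[str], per_screen: int = 1) -> list[list[str]]:
    return [results[i:i + per_screen]
            for i in range(0, len(results), per_screen)]

screens = paginate(["result A", "result B", "result C"])
assert len(screens) == 3 and screens[0] == ["result A"]
```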
  • when there are multiple search results and they are played by voice, the voice data of the search results can be processed with natural language processing and played in the form of a natural language dialogue.
  • the search process may be photographed and/or recorded, and the captured image and/or the recorded video may be saved, shared, and so on. These are explained separately below.
  • the process of photographing the search process may include: acquiring a screen capture image of the display content of the user interface, wherein if the screen capture is performed when the search result is displayed in text form, the screen capture image shows the real scene, the virtual cartoon object, and the text data corresponding to the search result.
  • the process of recording the search process may include: acquiring a recorded video of the display content of the user interface, wherein the recorded video records the dynamically changing real scene and the virtual cartoon object, and records any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data into which the search result is converted.
  • that is, the recorded video records the dynamically changing real scene and the virtual cartoon object, the voice data input by the user, the text data corresponding to the displayed search result, and the voice data into which the search result is converted.
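The side-channel content of such a recording can be thought of as an ordered log of the events the description names (user voice, result text, result speech) alongside the video frames. A sketch with invented event names; the patent does not prescribe any storage format:

```python
# Hypothetical event log accompanying a recorded search session.
# Event kind names are made up for this sketch.

class SessionRecorder:
    def __init__(self):
        self.events = []

    def log(self, kind: str, payload: str):
        self.events.append((kind, payload))

rec = SessionRecorder()
rec.log("user_voice", "Beijing weather")
rec.log("result_text", "Beijing weather: cloudy")
rec.log("result_speech", "Beijing weather: cloudy")
print([k for k, _ in rec.events])
```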
  • the present application also provides corresponding device embodiments, which are specifically described below.
  • the present application provides an apparatus embodiment of a search apparatus, including: a startup module 201, a generation module 202, a display module 203, a conversion module 204, a sending module 205, and a receiving module 206.
  • the startup module 201 is configured to start the camera to capture a real scene.
  • the generating module 202 is configured to generate a virtual object.
  • the display module 203 is configured to display the real scene and the virtual object in a user interface by using an augmented reality AR technology.
  • the conversion module 204 is configured to acquire voice data input by a user, and convert the voice data into corresponding text data.
  • the sending module 205 is configured to send the text data to a search server.
  • the receiving module 206 is configured to receive a search result corresponding to the text data returned by the search server.
  • the display module 203 is further configured to display the text data corresponding to the search result by using AR technology in the user interface that displays the real scene and the virtual object, and/or the device further includes an output module 207 configured to output the voice data into which the search result is converted.
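The module split above (conversion 204, sending 205, receiving 206, display 203, optional output 207) can be mirrored as small callables wired into one device object. This is a structural sketch only; all names are hypothetical and the modules are reduced to plain functions.

```python
# Sketch of the device's module wiring; each module is a callable.

class SearchDevice:
    def __init__(self, convert, send, receive, display, output=None):
        self.convert = convert    # conversion module 204: voice -> text
        self.send = send          # sending module 205: text -> server
        self.receive = receive    # receiving module 206: server -> result
        self.display = display    # display module 203: AR overlay
        self.output = output      # optional output module 207: TTS

    def handle_voice(self, voice_data: bytes) -> str:
        text = self.convert(voice_data)
        self.send(text)
        result = self.receive()
        self.display(result)
        if self.output:
            self.output(result)   # "and/or" voice output
        return result

shown = []
dev = SearchDevice(convert=lambda v: v.decode(),
                   send=lambda t: None,
                   receive=lambda: "Result",
                   display=shown.append)
print(dev.handle_voice(b"query"), shown)
```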
  • the startup module is further configured to start a search application or a search webpage
  • the display module is further configured to display an AR search key in the user interface
  • the receiving module is further configured to receive a search request of a user, where the search request is generated according to an operation of the AR search key by a user.
  • the startup module is further configured to start an AR application, where the display module is further configured to display an AR search key by using an AR technology in the user interface that displays the real scene and the virtual object;
  • the receiving module is further configured to receive a search request of a user, and the search request is generated according to a user operation of the AR search key.
  • the virtual object includes: a virtual key or virtual prompt information
  • the virtual key is used to enable or disable the voice input function according to a triggering operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in an enabled or disabled state;
  • the conversion module is specifically configured to acquire voice data input by the user when the voice input function is in an open state.
  • the virtual object includes: a virtual cartoon object
  • the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated image answering the question when the text data corresponding to the search result is displayed by using AR technology.
  • the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated image thinking about the question before the text data corresponding to the search result is displayed.
  • it also includes:
  • a first obtaining module configured to acquire user information
  • the sending module is specifically configured to send the text data and the user information to a search server;
  • the receiving module is specifically configured to receive the text data returned by the search server and the search result corresponding to the user information.
  • when displaying the text data corresponding to the search result by using AR technology, the display module is specifically configured to display, by using AR technology, the text data corresponding to the search results in a multi-screen scrolling manner, wherein the text data corresponding to one search result is displayed on each screen.
  • the second acquiring module is configured to acquire a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
  • the third acquiring module is configured to acquire a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
  • FIG. 3 is a block diagram of an apparatus 300 for searching, according to an exemplary embodiment.
  • device 300 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • apparatus 300 can include one or more of the following components: processing component 302, memory 304, power component 306, multimedia component 308, audio component 310, input/output (I/O) interface 312, sensor component 314, And a communication component 316.
  • Processing component 302 typically controls the overall operation of device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 302 can include one or more processors 320 to execute instructions to perform all or part of the steps of the above described methods.
  • processing component 302 can include one or more modules to facilitate interaction between component 302 and other components.
  • processing component 302 can include a multimedia module to facilitate interaction between multimedia component 308 and processing component 302.
  • Memory 304 is configured to store various types of data to support operation at device 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 304 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 306 provides power to various components of device 300.
  • Power component 306 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 300.
  • the multimedia component 308 includes a screen that provides an output interface between the device 300 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
  • the multimedia component 308 includes a front camera and/or a rear camera. When the device 300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera can be a fixed optical lens system or have focus and optical zoom capability.
  • the audio component 310 is configured to output and/or input an audio signal.
  • audio component 310 includes a microphone (MIC) that is configured to receive an external audio signal when device 300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 304 or transmitted via communication component 316.
  • audio component 310 also includes a speaker for outputting an audio signal.
  • the I/O interface 312 provides an interface between the processing component 302 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 314 includes one or more sensors for providing status assessment of various aspects to device 300.
  • sensor assembly 314 can detect the open/closed state of device 300 and the relative positioning of components, such as the display and keypad of device 300; sensor component 314 can also detect a change in position of device 300 or of one of its components, the presence or absence of user contact with device 300, the orientation or acceleration/deceleration of device 300, and temperature changes of device 300.
  • Sensor assembly 314 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 314 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 316 is configured to facilitate wired or wireless communication between device 300 and other devices.
  • the device 300 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 316 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 316 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • apparatus 300 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • non-transitory computer readable storage medium comprising instructions, such as a memory 304 comprising instructions executable by processor 320 of apparatus 300 to perform the above method.
  • the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a non-transitory computer readable storage medium when instructions in the storage medium are executed by a processor of a mobile terminal, enabling the mobile terminal to perform a search method, the method comprising:
  • the text data corresponding to the search result is displayed by using AR technology, and/or the voice data converted by the search result is output.
  • before starting the camera device to capture a real scene and generating a virtual object, the method further includes:
  • a search request of a user is received, the search request being generated according to a user operation of the AR search key.
  • before acquiring the voice data input by the user, the method further includes:
  • launching an AR application, and displaying an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; receiving a search request of a user, the search request being generated according to a user operation of the AR search key.
  • the virtual object includes: a virtual switch key or virtual prompt information
  • the virtual switch key is used to start or end the voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
  • acquiring the voice data input by the user includes: acquiring voice data input by the user while the voice input function is started.
  • the virtual object includes: a virtual cartoon object, where, when displaying the text data corresponding to the search result by using the AR technology, the method further includes:
  • the virtual cartoon object displayed in the user interface is presented in an animated image that answers the question.
  • before displaying the text data corresponding to the search result by using the AR technology, the method further includes:
  • presenting the virtual cartoon object displayed in the user interface as an animated figure pondering the question.
  • the method further includes: obtaining user information;
  • the sending the text data to the search server includes: transmitting the text data and the user information to a search server;
  • Receiving the search result corresponding to the text data returned by the search server includes: receiving the text data returned by the search server and a search result corresponding to the user information.
  • displaying the text data corresponding to the search result by using AR technology includes:
  • displaying, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
  • the server 400 can vary considerably depending on configuration or performance, and can include one or more central processing units (CPUs) 422 (e.g., one or more processors), memory 432, and one or more storage media 430 (e.g., one or more mass storage devices) storing an application 442 or data 444.
  • the memory 432 and the storage medium 430 may be short-term storage or persistent storage.
  • the program stored on storage medium 430 may include one or more modules (not shown), each of which may include a series of instruction operations in the server.
  • the central processor 422 can be configured to communicate with the storage medium 430 and execute, on the server 400, the series of instruction operations stored in the storage medium 430.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a search method and related apparatus. The method includes: starting a camera device to capture a real scene and generating a virtual object; displaying the real scene and the virtual object in a user interface by using AR technology; acquiring voice data input by a user and converting the voice data into corresponding text data; sending the text data to a search server and receiving a search result corresponding to the text data returned by the search server; and, in the user interface displaying the real scene and the virtual object, displaying text data corresponding to the search result by using AR technology, and/or outputting voice data converted from the search result. The embodiments of the present invention not only allow multiple search results to be obtained and viewed more intuitively, improving search efficiency, but also enhance the sense of a real scene through voice interaction with the virtual object, improving the user experience.

Description

Search method and related apparatus
This application claims priority to Chinese Patent Application No. 201810118980.3, entitled "Search method and related apparatus" and filed with the Chinese Patent Office on February 6, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information technology, and in particular to a search method and related apparatus.
Background
In traditional search, a user enters search content in text form in the search input box of a search engine; the search engine retrieves multiple search results corresponding to the search content and displays them as a text list.
With this traditional approach, however, the user has to look through the many search results displayed as a text list; search efficiency is low and there is little sense of a real scene, resulting in a poor user experience.
Summary
The technical problem solved by the present application is to provide a search method and related apparatus that integrate augmented reality (AR) technology into the search process, which not only improves search efficiency but also enhances the sense of the real scene, thereby improving the user experience.
To this end, the technical solution of the present application is as follows:
An embodiment of the present application provides a search method, including:
starting a camera device to capture a real scene and generating a virtual object;
displaying the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
acquiring voice data input by a user, and converting the voice data into corresponding text data;
sending the text data to a search server, and receiving a search result corresponding to the text data returned by the search server;
in the user interface displaying the real scene and the virtual object, displaying text data corresponding to the search result by using AR technology, and/or outputting voice data converted from the search result.
Optionally, before starting the camera device to capture the real scene and generating the virtual object, the method further includes:
starting a search application or a search web page, and displaying an AR search key in the user interface;
receiving a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, before acquiring the voice data input by the user, the method further includes:
starting an AR application, and displaying an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; receiving a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, the virtual object includes a virtual switch key or virtual prompt information;
the virtual switch key is used to start or end the voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
acquiring the voice data input by the user includes:
acquiring voice data input by the user while the voice input function is started.
Optionally, the virtual object includes a virtual cartoon object, and when displaying the text data corresponding to the search result by using AR technology, the method further includes:
presenting the virtual cartoon object displayed in the user interface as an animated figure answering the question.
Optionally, before displaying the text data corresponding to the search result by using AR technology, the method further includes:
presenting the virtual cartoon object displayed in the user interface as an animated figure pondering the question.
Optionally, the method further includes: acquiring user information;
sending the text data to the search server includes: sending the text data and the user information to the search server;
receiving the search result corresponding to the text data returned by the search server includes: receiving a search result corresponding to the text data and the user information returned by the search server.
Optionally, displaying the text data corresponding to the search result by using AR technology includes:
displaying, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
Optionally, the method further includes:
acquiring a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
Optionally, the method further includes:
acquiring a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
The present application provides a search apparatus, including: a starting module, a generating module, a display module, a conversion module, a sending module, and a receiving module;
the starting module is configured to start a camera device to capture a real scene;
the generating module is configured to generate a virtual object;
the display module is configured to display the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
the conversion module is configured to acquire voice data input by a user and convert the voice data into corresponding text data;
the sending module is configured to send the text data to a search server;
the receiving module is configured to receive a search result corresponding to the text data returned by the search server;
the display module is further configured to display, in the user interface displaying the real scene and the virtual object, text data corresponding to the search result by using AR technology; and/or the apparatus further includes an output module configured to output voice data converted from the search result.
Optionally, the starting module is further configured to start a search application or a search web page, and the display module is further configured to display an AR search key in the user interface;
the receiving module is further configured to receive a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, the starting module is further configured to start an AR application; the display module is further configured to display an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; and the receiving module is further configured to receive a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, the virtual object includes a virtual switch key or virtual prompt information;
the virtual switch key is used to start or end the voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
when acquiring the voice data input by the user, the conversion module is specifically configured to acquire voice data input by the user while the voice input function is started.
Optionally, the virtual object includes a virtual cartoon object, and the display module is further configured to, when displaying the text data corresponding to the search result by using AR technology, present the virtual cartoon object displayed in the user interface as an animated figure answering the question.
Optionally, the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated figure pondering the question before displaying the text data corresponding to the search result.
Optionally, the apparatus further includes:
a first acquiring module configured to acquire user information;
the sending module is specifically configured to send the text data and the user information to the search server;
the receiving module is specifically configured to receive a search result corresponding to the text data and the user information returned by the search server.
Optionally, when displaying the text data corresponding to the search result by using AR technology, the display module is specifically configured to display, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
Optionally, the apparatus further includes a second acquiring module configured to acquire a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
Optionally, the apparatus further includes a third acquiring module configured to acquire a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
An embodiment of the present application provides an apparatus for searching, including a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs containing instructions for performing the following operations:
starting a camera device to capture a real scene and generating a virtual object;
displaying the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
acquiring voice data input by a user, and converting the voice data into corresponding text data;
sending the text data to a search server, and receiving a search result corresponding to the text data returned by the search server;
in the user interface displaying the real scene and the virtual object, displaying text data corresponding to the search result by using AR technology, and/or outputting voice data converted from the search result.
An embodiment of the present application further provides a machine-readable medium having stored thereon instructions that, when executed by one or more processors, cause an apparatus to perform any of the method embodiments described above.
It can be seen from the above technical solutions that, in the embodiments of the present invention, a camera device is started to capture a real scene and a virtual object is generated; the real scene and the virtual object are displayed in the same user interface by using AR technology; voice data input by the user is acquired and converted into corresponding text data, which is sent to a search server so as to obtain the server's search result for the text data; and, in the user interface displaying the real scene and the virtual object, text data corresponding to the search result is displayed by using AR technology, and/or voice data converted from the search result is output.
Thus, the embodiments of the present invention integrate AR technology into the search process, display the real scene and the virtual object in the same user interface by using AR technology, simulate voice interaction with the virtual object, and present the search results via AR technology or voice. Therefore, the embodiments of the present invention not only allow multiple search results to be obtained and viewed more intuitively, improving search efficiency, but also enhance the sense of the real scene through voice interaction with the virtual object, improving the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them.
FIG. 1 is a schematic flowchart of a method embodiment provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an apparatus embodiment provided by an embodiment of the present application;
FIG. 3 is a block diagram of an apparatus 300 for searching according to an exemplary embodiment;
FIG. 4 is a schematic structural diagram of a server in an embodiment of the present invention.
Detailed Description
In traditional search, a user enters search content in text form in the search input box of a search engine; the search engine retrieves multiple search results corresponding to the search content and displays them as a text list. For example, if the user enters "weather" in the search input box, the search engine retrieves multiple results corresponding to "weather", such as "Beijing weather: XX", "Shanghai weather: XX", and so on, and displays them as a text list.
With this traditional approach, however, the user has to look through the many search results displayed as a text list; search efficiency is low and there is little sense of a real scene, resulting in a poor user experience.
To solve the above technical problem, the present application provides a search method and related apparatus that integrate AR technology into the search process, so that the user can obtain and view search results more intuitively, improving search efficiency; moreover, voice interaction with a virtual object enhances the sense of the real scene, improving the user experience.
For a better understanding of the technical solution of the present invention, the definition and current applications of AR technology are described below.
AR technology superimposes virtual objects such as images, videos, and 3D models onto a real scene captured by a camera device; its goal is to place virtual objects within the real scene on the screen and interact with them. Many AR products already exist, for example games that raise virtual pets in a real scene, or applications that hunt for virtual red envelopes in a real scene. In such AR products, however, the user typically interacts by tapping the screen; for example, the user taps a virtual key on the screen to make the virtual pet perform a corresponding action. This approach cannot simulate interaction between the user and the virtual pet in a real scene (for example, the user cannot hold a voice conversation with the virtual pet), which degrades the user experience.
When integrating AR technology into the search process, interacting by tapping the screen, for example displaying an input keyboard on the screen and entering search content by tapping, significantly disrupts the sense of the real scene and results in a poor user experience. To solve this technical problem, the embodiments of the present invention adopt an interaction method closer to a real scene, enhancing the user experience. The technical solution of the embodiments of the present invention is described in detail below.
To enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
Referring to FIG. 1, an embodiment of the present invention provides a method embodiment of the search method. This embodiment can be applied to a terminal device (also called user equipment); common terminal devices include mobile phones, tablet computers, notebook computers, palmtop computers, mobile internet devices, wearable devices (such as head-mounted devices), personal digital assistants (PDAs), laptop portable computers, and the like.
The method of this embodiment includes:
S101: Start a camera device to capture a real scene and generate a virtual object.
In this embodiment of the present invention, a camera device is a device with a photographing and/or video recording function. The camera device may be built into the terminal device; for example, when the terminal device is a mobile phone, the camera device may be the phone's built-in camera. The camera device may also be an external device, for example a camera on an external head-mounted device; the head-mounted device and the terminal device are connected via a communication network, and the terminal device starts the camera on the head-mounted device by sending it a start instruction.
In this embodiment of the present invention, a virtual object is a virtualized object used for voice interaction with the user. The virtual object may include any one or more of a virtual switch key, virtual prompt information, and a virtual cartoon figure. Examples follow.
In an optional embodiment, the virtual object may include a virtual switch key and/or virtual prompt information.
The virtual switch key is used to start or end the voice input function according to a trigger operation of the user. For example, the virtual switch key is a virtual key displayed in the user interface; the user taps it to start the voice input function and can then input voice data. The user may close the voice input function by tapping the virtual key again, or the function may close automatically after a preset time (the remaining time may be indicated by a progress bar, countdown, or the like), or it may close automatically once the user is detected to have finished inputting voice data.
The virtual prompt information is used to prompt the user that the voice input function is in a started or ended state. For example, the terminal device starts the voice input function and displays virtual information in the user interface such as "Voice input is on, you may speak", or indicates the remaining time with a progress bar, countdown, or the like. The user can then input voice data; after a preset time, or upon detecting a user operation, the voice input function is closed.
In another optional embodiment, the virtual object may include a virtual cartoon figure used to simulate a question-and-answer session with the user. The cartoon figure's animation effects may be triggered by touch or other means, and the figure itself may be selected and changed.
In this embodiment of the present invention, during a traditional text search, the user can switch to the AR-based search of this embodiment by operating a corresponding AR search key.
In this embodiment of the present invention, an AR search function may be provided in a search application (that is, an application with a search function) or a search web page. For example, after the search application or search web page is started, a search input box and an AR search key are displayed in the user interface; the search input box is used to enter search content in text form, supporting text-based search. When the user wants to perform an AR-based search, the user triggers the AR search key; the terminal device receives the user's search request, which is generated according to the user's operation of the AR search key, and, according to the search request, starts the camera device to capture the real scene and generates the virtual object.
Alternatively, in this embodiment of the present invention, the search function may be embedded in an AR-related application, and the AR-based search of this embodiment may be implemented by providing a corresponding AR search key in the AR application. For example, the AR application is opened and, in the user interface displaying the real scene and the virtual object, an AR search key is displayed by using AR technology. When the user wants to perform an AR-based search, the user triggers the AR search key; the terminal device receives the user's search request, which is generated according to the user's operation of the AR search key, and acquires voice data according to the search request.
S102: Display the real scene and the virtual object in a user interface by using AR technology.
For example, the virtual object may include a virtual switch key and a virtual cartoon figure; the real scene captured by the camera device in real time, the virtual switch key, and the virtual cartoon figure are superimposed and displayed in the same user interface.
S103: Acquire voice data input by the user, and convert the voice data into corresponding text data.
In this embodiment of the present invention, there is no need to display an input keyboard on the screen and enter search content by tapping the screen; instead, voice input simulates interaction between the user and the virtual object in a real scene, enhancing the user experience. The voice data may describe a question posed by the user.
The voice data input by the user may be acquired while the voice input function is started.
In one embodiment, the user may tap a virtual key displayed in the user interface to start the voice input function and then input voice data, such as "Beijing weather". After finishing, the user may close the voice input function by tapping the virtual key again, or the function may close automatically after a preset time (the remaining time may be indicated by a progress bar, countdown, or the like), or it may close automatically once the user is detected to have finished inputting voice data; the voice data input while the voice input function was started is acquired and converted into corresponding text data.
In another embodiment, the voice input function may be started automatically, for example as soon as the search application is entered. When the user is detected inputting voice data, the voice data is acquired; the voice input function may then be closed by tapping the virtual key, or automatically after a preset time (the remaining time may be indicated by a progress bar, countdown, or the like), or automatically once the user is detected to have finished inputting voice data. While the voice input function is started, the voice data input by the user may be captured through the terminal device's microphone.
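As a rough, non-limiting sketch (not part of the original disclosure), the open/close behaviour of the voice input function described above, a virtual switch key that toggles on tap and can also close automatically after a preset time, could be modelled as follows; the class and method names are illustrative assumptions:

```python
import time


class VoiceInputController:
    """Models the voice input function: opened/closed by tapping the virtual
    switch key, with an optional automatic close after a preset time."""

    def __init__(self, timeout_seconds=None):
        self.timeout = timeout_seconds  # preset auto-close time; None = manual only
        self.opened_at = None           # timestamp at which voice input was opened

    def toggle(self, now=None):
        """Simulates tapping the virtual switch key; returns the new state."""
        now = time.monotonic() if now is None else now
        self.opened_at = None if self.is_open(now) else now
        return self.is_open(now)

    def is_open(self, now=None):
        """True while voice data can be captured from the microphone."""
        if self.opened_at is None:
            return False
        now = time.monotonic() if now is None else now
        if self.timeout is not None and now - self.opened_at >= self.timeout:
            self.opened_at = None  # auto-close after the preset time
            return False
        return True
```

A tap while the function is open closes it; the timeout stands in for the countdown or progress-bar hint mentioned above.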
S104: Send the text data to a search server, and receive a search result corresponding to the text data returned by the search server.
The search server may specifically be a server in a search engine used for data search. The search server can retrieve a search result corresponding to the text data. If the voice data input by the user describes a question posed by the user, the search result returned by the search server can answer that question. For example, if the text data converted from the user's voice data is "Beijing weather", the search server retrieves the corresponding result: "Beijing weather: low of -7 degrees, high of 5 degrees, cloudy, northwest wind force 2; it feels a bit chilly; outdoor activities."
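Purely as an illustration of the request/response exchange described above (the payload shape and function names are assumptions, not the patented interface), the client side could be sketched as:

```python
def ar_search(text, user_info, send_fn):
    """Packages the recognized text with optional user information and hands
    it to `send_fn`, which stands in for the real call to the search server."""
    request = {"query": text, "user": user_info or {}}
    return send_fn(request)


def fake_search_server(request):
    """A stand-in server that narrows the result by the user's location,
    loosely mimicking the "Beijing weather" example."""
    canned = {("weather", "Beijing"): "Beijing weather: low -7, high 5, cloudy"}
    key = (request["query"], request["user"].get("location"))
    return canned.get(key, "no matching result")
```

In a real deployment `send_fn` would perform the network request; injecting it keeps the client logic testable without a server.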
S105: In the user interface displaying the real scene and the virtual object, display text data corresponding to the search result by using AR technology, and/or output voice data converted from the search result.
Embodiments of the present invention provide multiple ways to output the search result. For example, AR technology may be used to superimpose the text data corresponding to the search result on the user interface. When the virtual object includes a virtual cartoon figure, the figure can simulate a question-and-answer session with the user; thus, while the text data of the search result is superimposed, the virtual cartoon object may be presented as an animated figure answering the question, for example shown speaking. Before the text data corresponding to the search result is displayed, the virtual cartoon figure may be presented as an animated figure pondering the question.
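The thinking-then-answering animation sequence described above can be expressed as a tiny state machine; this is an illustrative sketch only, and the state and event names are invented for the example:

```python
from enum import Enum, auto


class CartoonState(Enum):
    IDLE = auto()       # default animation
    THINKING = auto()   # shown after the question, while awaiting the result
    ANSWERING = auto()  # shown while the result text is displayed or spoken


def next_state(state, event):
    """Advances the virtual cartoon figure's animation state."""
    transitions = {
        (CartoonState.IDLE, "question_asked"): CartoonState.THINKING,
        (CartoonState.THINKING, "result_received"): CartoonState.ANSWERING,
        (CartoonState.ANSWERING, "done"): CartoonState.IDLE,
    }
    return transitions.get((state, event), state)  # unknown events keep the state
```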
As another example, the search result may be converted into voice data and output, that is, the voice data is played, further simulating interaction between the user and the virtual object in a real scene; the virtual cartoon figure may likewise be presented as an animated figure answering the question. As yet another example, the text data corresponding to the search result may be superimposed on the display while the voice data converted from the search result is output at the same time.
If receiving the search result returned by the search server times out, for example when the network signal is poor or the search server fails to find a well-matched result, prompt text data may be generated to indicate that the search failed; in the user interface displaying the real scene and the virtual object, the prompt text data is then displayed by using AR technology, and/or voice data converted from the prompt text data is output.
It can be seen that the embodiments of the present invention integrate AR technology into the search process, display the real scene and the virtual object in the same user interface by using AR technology, simulate voice interaction with the virtual object, and present the search results via AR technology or voice. Therefore, the embodiments of the present invention not only allow multiple search results to be obtained and viewed more intuitively, improving search efficiency, but also enhance the sense of the real scene through voice interaction with the virtual object, improving the user experience. It should be emphasized that the embodiments of the present invention do not display an input keyboard on the screen or enter search content by tapping the screen; instead, voice input simulates interaction between the user and the virtual object in a real scene, which enhances the user experience.
In this embodiment of the present invention, when searching via the search server, the search results may be further filtered by user information. Specifically, the method further includes: acquiring user information; sending the text data to the search server includes: sending the text data and the user information to the search server; and receiving the search result corresponding to the text data returned by the search server includes: receiving a search result corresponding to the text data and the user information returned by the search server.
The user information may be a user identifier, user attribute information (such as the user's location, gender, or preferences), user history behavior information (such as the user's search history), and so on. For example, the user inputs voice data that is converted into the text data "today's weather", the user's location information "Beijing" is acquired, and the location information and the text data are sent to the search server for an associated search.
In this embodiment of the present invention, to further enhance the sense of the real scene and improve the user's search efficiency, only one search result may be output to the user. For example, when searching for results corresponding to the text data, the search server may select the single result with the highest match score and return that one result to the terminal device; alternatively, the terminal device may receive multiple results from the search server, filter them, and output the single best-matching result.
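The terminal-side filtering just mentioned, keeping only the best-matching result, amounts to a one-liner; the `score` field is an assumed shape for the server's response, not part of the disclosure:

```python
def best_result(results):
    """Returns the single result with the highest match score, or None
    when the server returned no results at all."""
    return max(results, key=lambda r: r["score"]) if results else None
```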
When there are multiple search results displayed as text, they may be played in a scrolling fashion to improve search efficiency and the user experience. Specifically, displaying the text data corresponding to the search results by using AR technology includes: displaying, by using AR technology, the text data as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result. When multiple search results are played as voice, the results' voice data may be processed with natural language techniques and played in the form of a natural-language dialogue.
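The one-result-per-screen scrolling display could be driven by a small carousel such as the following sketch (the names are illustrative assumptions):

```python
class ResultCarousel:
    """Scrolls through search results one screen at a time,
    one result's text per screen, wrapping at either end."""

    def __init__(self, results):
        self.results = list(results)
        self.index = 0

    def current(self):
        """The result shown on the current screen, or None if there are none."""
        return self.results[self.index] if self.results else None

    def scroll(self, step=1):
        """step=1 scrolls to the next screen, step=-1 to the previous one."""
        if self.results:
            self.index = (self.index + step) % len(self.results)
        return self.current()
```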
In this embodiment of the present invention, the search process can be photographed and/or recorded, and the resulting images and/or videos can be saved, shared, and so on. These are described separately below.
Photographing the search process may include: acquiring a screenshot image of the display content of the user interface; if the screenshot is taken while the search result is presented as text, the screenshot image shows the real scene, the virtual cartoon object, and the text data corresponding to the search result.
Recording the search process may include: acquiring a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result. For example, if the entire search process is recorded, the video records the dynamically changing real scene and virtual cartoon object, the voice data input by the user, the displayed text data corresponding to the search result, and the voice data converted from the search result.
Corresponding to the above method embodiments, the present application also provides apparatus embodiments, described below.
Referring to FIG. 2, the present application provides an apparatus embodiment of a search apparatus, including: a starting module 201, a generating module 202, a display module 203, a conversion module 204, a sending module 205, and a receiving module 206.
The starting module 201 is configured to start a camera device to capture a real scene.
The generating module 202 is configured to generate a virtual object.
The display module 203 is configured to display the real scene and the virtual object in a user interface by using augmented reality (AR) technology.
The conversion module 204 is configured to acquire voice data input by a user and convert the voice data into corresponding text data.
The sending module 205 is configured to send the text data to a search server.
The receiving module 206 is configured to receive a search result corresponding to the text data returned by the search server.
The display module 203 is further configured to display, in the user interface displaying the real scene and the virtual object, text data corresponding to the search result by using AR technology; and/or the apparatus further includes an output module 207 configured to output voice data converted from the search result.
Optionally, the starting module is further configured to start a search application or a search web page, and the display module is further configured to display an AR search key in the user interface;
the receiving module is further configured to receive a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, the starting module is further configured to start an AR application; the display module is further configured to display an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; and the receiving module is further configured to receive a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, the virtual object includes a virtual switch key or virtual prompt information;
the virtual switch key is used to start or end the voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
when acquiring the voice data input by the user, the conversion module is specifically configured to acquire voice data input by the user while the voice input function is started.
Optionally, the virtual object includes a virtual cartoon object, and the display module is further configured to, when displaying the text data corresponding to the search result by using AR technology, present the virtual cartoon object displayed in the user interface as an animated figure answering the question.
Optionally, the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated figure pondering the question before displaying the text data corresponding to the search result.
Optionally, the apparatus further includes:
a first acquiring module configured to acquire user information;
the sending module is specifically configured to send the text data and the user information to the search server;
the receiving module is specifically configured to receive a search result corresponding to the text data and the user information returned by the search server.
Optionally, when displaying the text data corresponding to the search result by using AR technology, the display module is specifically configured to display, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
Optionally, the apparatus further includes a second acquiring module configured to acquire a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
Optionally, the apparatus further includes a third acquiring module configured to acquire a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the method embodiments and will not be elaborated here.
FIG. 3 is a block diagram of an apparatus 300 for searching according to an exemplary embodiment. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 3, the apparatus 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316.
The processing component 302 typically controls the overall operation of the apparatus 300, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 302 may include one or more processors 320 to execute instructions so as to perform all or part of the steps of the above methods. In addition, the processing component 302 may include one or more modules to facilitate interaction between the processing component 302 and the other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operation of the apparatus 300. Examples of such data include instructions for any application or method operated on the apparatus 300, contact data, phone book data, messages, pictures, videos, and the like. The memory 304 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 306 provides power to the various components of the apparatus 300, and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen that provides an output interface between the apparatus 300 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP); if the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel; the touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 308 includes a front camera and/or a rear camera, which can receive external multimedia data when the apparatus 300 is in an operating mode such as a shooting mode or a video mode. Each front or rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, the audio component 310 includes a microphone (MIC) configured to receive external audio signals when the apparatus 300 is in an operating mode such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 304 or transmitted via the communication component 316. In some embodiments, the audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules such as a keyboard, a click wheel, or buttons. Such buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 314 includes one or more sensors for providing status assessments of various aspects of the apparatus 300. For example, the sensor component 314 can detect the open/closed state of the apparatus 300 and the relative positioning of components such as its display and keypad; it can also detect a change in position of the apparatus 300 or of one of its components, the presence or absence of user contact with the apparatus 300, the orientation or acceleration/deceleration of the apparatus 300, and temperature changes of the apparatus 300. The sensor component 314 may include a proximity sensor configured to detect nearby objects without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The apparatus 300 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 also includes a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 304 comprising instructions executable by the processor 320 of the apparatus 300 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a search method, the method including:
starting a camera device to capture a real scene and generating a virtual object;
displaying the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
acquiring voice data input by a user, and converting the voice data into corresponding text data;
sending the text data to a search server, and receiving a search result corresponding to the text data returned by the search server;
in the user interface displaying the real scene and the virtual object, displaying text data corresponding to the search result by using AR technology, and/or outputting voice data converted from the search result.
Optionally, before starting the camera device to capture the real scene and generating the virtual object, the method further includes:
starting a search application or a search web page, and displaying an AR search key in the user interface;
receiving a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, before acquiring the voice data input by the user, the method further includes:
starting an AR application, and displaying an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; receiving a search request of the user, the search request being generated according to the user's operation of the AR search key.
Optionally, the virtual object includes a virtual switch key or virtual prompt information;
the virtual switch key is used to start or end the voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
acquiring the voice data input by the user includes:
acquiring voice data input by the user while the voice input function is started.
Optionally, the virtual object includes a virtual cartoon object, and when displaying the text data corresponding to the search result by using AR technology, the method further includes:
presenting the virtual cartoon object displayed in the user interface as an animated figure answering the question.
Optionally, before displaying the text data corresponding to the search result by using AR technology, the method further includes:
presenting the virtual cartoon object displayed in the user interface as an animated figure pondering the question.
Optionally, the method further includes: acquiring user information;
sending the text data to the search server includes: sending the text data and the user information to the search server;
receiving the search result corresponding to the text data returned by the search server includes: receiving a search result corresponding to the text data and the user information returned by the search server.
Optionally, displaying the text data corresponding to the search result by using AR technology includes:
displaying, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
Optionally, the method further includes:
acquiring a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
Optionally, the method further includes:
acquiring a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
FIG. 4 is a schematic structural diagram of a server in an embodiment of the present invention. The server 400 can vary considerably depending on configuration or performance, and can include one or more central processing units (CPUs) 422 (for example, one or more processors), memory 432, and one or more storage media 430 (for example, one or more mass storage devices) storing an application 442 or data 444. The memory 432 and the storage media 430 may be transient or persistent storage. A program stored on a storage medium 430 may include one or more modules (not shown in the figure), each of which may include a series of instruction operations on the server. Further, the central processor 422 may be configured to communicate with the storage medium 430 and execute, on the server 400, the series of instruction operations stored in the storage medium 430.
The server 400 may also include one or more power supplies 426, one or more wired or wireless network interfaces 450, one or more input/output interfaces 458, one or more keyboards 456, and/or one or more operating systems 441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
Other embodiments of the invention will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The present invention is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the invention is limited only by the appended claims.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (22)

  1. A search method, characterized by comprising:
    starting a camera device to capture a real scene and generating a virtual object;
    displaying the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
    acquiring voice data input by a user, and converting the voice data into corresponding text data;
    sending the text data to a search server, and receiving a search result corresponding to the text data returned by the search server;
    in the user interface displaying the real scene and the virtual object, displaying text data corresponding to the search result by using AR technology, and/or outputting voice data converted from the search result.
  2. The search method according to claim 1, wherein before starting the camera device to capture the real scene and generating the virtual object, the method further comprises:
    starting a search application or a search web page, and displaying an AR search key in the user interface;
    receiving a search request of the user, the search request being generated according to the user's operation of the AR search key.
  3. The search method according to claim 1, wherein before acquiring the voice data input by the user, the method further comprises:
    starting an AR application, and displaying an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; receiving a search request of the user, the search request being generated according to the user's operation of the AR search key.
  4. The search method according to claim 1, wherein the virtual object comprises a virtual switch key or virtual prompt information;
    wherein the virtual switch key is used to start or end a voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
    acquiring the voice data input by the user comprises:
    acquiring voice data input by the user while the voice input function is started.
  5. The search method according to claim 1, wherein the virtual object comprises a virtual cartoon object, and when displaying the text data corresponding to the search result by using AR technology, the method further comprises:
    presenting the virtual cartoon object displayed in the user interface as an animated figure answering the question.
  6. The search method according to claim 5, wherein before displaying the text data corresponding to the search result by using AR technology, the method further comprises:
    presenting the virtual cartoon object displayed in the user interface as an animated figure pondering the question.
  7. The search method according to claim 1, further comprising: acquiring user information;
    sending the text data to the search server comprises: sending the text data and the user information to the search server;
    receiving the search result corresponding to the text data returned by the search server comprises: receiving a search result corresponding to the text data and the user information returned by the search server.
  8. The search method according to claim 1, wherein displaying the text data corresponding to the search result by using AR technology comprises:
    displaying, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
  9. The search method according to claim 1, further comprising:
    acquiring a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
  10. The search method according to claim 1, further comprising:
    acquiring a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
  11. A search apparatus, characterized by comprising: a starting module, a generating module, a display module, a conversion module, a sending module, and a receiving module;
    the starting module is configured to start a camera device to capture a real scene;
    the generating module is configured to generate a virtual object;
    the display module is configured to display the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
    the conversion module is configured to acquire voice data input by a user and convert the voice data into corresponding text data;
    the sending module is configured to send the text data to a search server;
    the receiving module is configured to receive a search result corresponding to the text data returned by the search server;
    the display module is further configured to display, in the user interface displaying the real scene and the virtual object, text data corresponding to the search result by using AR technology; and/or the apparatus further comprises an output module configured to output voice data converted from the search result.
  12. The search apparatus according to claim 11, wherein the starting module is further configured to start a search application or a search web page, and the display module is further configured to display an AR search key in the user interface;
    the receiving module is further configured to receive a search request of the user, the search request being generated according to the user's operation of the AR search key.
  13. The search apparatus according to claim 11, wherein the starting module is further configured to start an AR application; the display module is further configured to display an AR search key by using AR technology in the user interface displaying the real scene and the virtual object; and the receiving module is further configured to receive a search request of the user, the search request being generated according to the user's operation of the AR search key.
  14. The search apparatus according to claim 11, wherein the virtual object comprises a virtual switch key or virtual prompt information;
    wherein the virtual switch key is used to start or end a voice input function according to a trigger operation of the user, and the virtual prompt information is used to prompt the user that the voice input function is in a started or ended state;
    when acquiring the voice data input by the user, the conversion module is specifically configured to acquire voice data input by the user while the voice input function is started.
  15. The search apparatus according to claim 11, wherein the virtual object comprises a virtual cartoon object, and the display module is further configured to, when displaying the text data corresponding to the search result by using AR technology, present the virtual cartoon object displayed in the user interface as an animated figure answering the question.
  16. The search apparatus according to claim 15, wherein the display module is further configured to present the virtual cartoon object displayed in the user interface as an animated figure pondering the question before displaying the text data corresponding to the search result.
  17. The search apparatus according to claim 11, further comprising:
    a first acquiring module configured to acquire user information;
    the sending module is specifically configured to send the text data and the user information to the search server;
    the receiving module is specifically configured to receive a search result corresponding to the text data and the user information returned by the search server.
  18. The search apparatus according to claim 11, wherein, when displaying the text data corresponding to the search result by using AR technology, the display module is specifically configured to display, by using AR technology, the text data corresponding to the search results as a plurality of scrolling screens, wherein each screen displays the text data corresponding to one search result.
  19. The search apparatus according to claim 11, further comprising a second acquiring module configured to acquire a screenshot image of the display content of the user interface, in which the real scene, the virtual cartoon object, and the text data corresponding to the search result are displayed.
  20. The search apparatus according to claim 11, further comprising a third acquiring module configured to acquire a recorded video of the display content of the user interface, the recorded video recording the dynamically changing real scene and virtual cartoon object, and further recording any one or more of the voice data input by the user, the text data corresponding to the search result, and the voice data converted from the search result.
  21. An apparatus for searching, characterized by comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs containing instructions for performing the following operations:
    starting a camera device to capture a real scene and generating a virtual object;
    displaying the real scene and the virtual object in a user interface by using augmented reality (AR) technology;
    acquiring voice data input by a user, and converting the voice data into corresponding text data;
    sending the text data to a search server, and receiving a search result corresponding to the text data returned by the search server;
    in the user interface displaying the real scene and the virtual object, displaying text data corresponding to the search result by using AR technology, and/or outputting voice data converted from the search result.
  22. A machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the search method according to one or more of claims 1 to 10.
PCT/CN2018/123766 2018-02-06 2018-12-26 Search method and related apparatus WO2019153925A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810118980.3A CN108345667A (zh) 2018-02-06 2018-02-06 一种搜索方法及相关装置
CN201810118980.3 2018-02-06

Publications (1)

Publication Number Publication Date
WO2019153925A1 true WO2019153925A1 (zh) 2019-08-15

Family

ID=62959059

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123766 WO2019153925A1 (zh) 2018-02-06 2018-12-26 一种搜索方法及相关装置

Country Status (2)

Country Link
CN (1) CN108345667A (zh)
WO (1) WO2019153925A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345667A (zh) * 2018-02-06 2018-07-31 北京搜狗科技发展有限公司 一种搜索方法及相关装置
CN110148406B (zh) * 2019-04-12 2022-03-04 北京搜狗科技发展有限公司 一种数据处理方法和装置、一种用于数据处理的装置
CN110545440B (zh) * 2019-09-05 2021-09-28 广州方硅信息技术有限公司 游戏交互方法、终端设备以及计算机存储介质
CN111680177A (zh) * 2020-06-01 2020-09-18 广东小天才科技有限公司 数据搜索方法及电子设备、计算机可读存储介质
CN111736799A (zh) * 2020-06-18 2020-10-02 百度在线网络技术(北京)有限公司 基于人机交互的语音交互方法、装置、设备和介质
CN113014989A (zh) * 2021-02-26 2021-06-22 拉扎斯网络科技(上海)有限公司 视频互动方法、电子设备和计算机可读存储介质
CN116940967A (zh) * 2021-04-21 2023-10-24 深圳传音控股股份有限公司 图像控制方法、移动终端及存储介质
CN116563495A (zh) 2022-01-27 2023-08-08 腾讯科技(深圳)有限公司 一种数据处理方法、计算机设备以及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103035135A (zh) * 2012-11-27 2013-04-10 北京航空航天大学 基于增强现实技术的儿童认知系统及认知方法
US20170155631A1 (en) * 2015-12-01 2017-06-01 Integem, Inc. Methods and systems for personalized, interactive and intelligent searches
CN107025683A (zh) * 2017-03-30 2017-08-08 联想(北京)有限公司 一种信息处理方法及电子设备
CN108345667A (zh) * 2018-02-06 2018-07-31 北京搜狗科技发展有限公司 一种搜索方法及相关装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105487746A (zh) * 2015-08-28 2016-04-13 小米科技有限责任公司 搜索结果的呈现方法和装置
CN105469787B (zh) * 2015-12-02 2020-07-24 百度在线网络技术(北京)有限公司 信息展示方法和装置
CN107515869B (zh) * 2016-06-15 2023-03-14 北京搜狗科技发展有限公司 一种搜索方法和装置、一种用于搜索的装置
CN106096857A (zh) * 2016-06-23 2016-11-09 中国人民解放军63908部队 增强现实版交互式电子技术手册、内容构建及辅助维修/辅助操作流程的构建
CN106383587B (zh) * 2016-10-26 2020-08-04 腾讯科技(深圳)有限公司 一种增强现实场景生成方法、装置及设备
CN106886582A (zh) * 2017-02-07 2017-06-23 广东小天才科技有限公司 一种在终端设备内置学习助手的方法及系统


Also Published As

Publication number Publication date
CN108345667A (zh) 2018-07-31

Similar Documents

Publication Publication Date Title
WO2019153925A1 (zh) 一种搜索方法及相关装置
US11503377B2 (en) Method and electronic device for processing data
CN109637518B (zh) 虚拟主播实现方法及装置
WO2022042089A1 (zh) 直播间的互动方法及装置
CN107948708B (zh) 弹幕展示方法及装置
WO2017020408A1 (zh) 视频录制方法和装置
US11405659B2 (en) Method and terminal device for video recording
CN109729372B (zh) 直播间切换方法、装置、终端、服务器及存储介质
US20220013026A1 (en) Method for video interaction and electronic device
WO2022188305A1 (zh) 信息展示方法及装置、电子设备、存储介质及计算机程序
CN110572716B (zh) 多媒体数据播放方法、装置及存储介质
RU2663709C2 (ru) Способ и устройство для обработки информации
WO2017088247A1 (zh) 输入处理方法、装置及设备
CN105786507B (zh) 显示界面切换的方法及装置
CN111954063B (zh) 视频直播间的内容显示控制方法及装置
RU2666626C1 (ru) Способ и устройство для управления состоянием воспроизведения
WO2022198934A1 (zh) 卡点视频的生成方法及装置
WO2017219497A1 (zh) 消息生成方法及装置
US20210029304A1 (en) Methods for generating video, electronic device and storage medium
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN106331328B (zh) 信息提示的方法及装置
WO2023040202A1 (zh) 人脸识别方法及装置、电子设备和存储介质
CN108986803B (zh) 场景控制方法及装置、电子设备、可读存储介质
WO2019006768A1 (zh) 一种基于无人机的停车占位方法及装置
WO2024067468A1 (zh) 基于图像识别的交互控制方法、装置及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905834

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905834

Country of ref document: EP

Kind code of ref document: A1