CN111832560A - Information output method, device, equipment and medium - Google Patents


Info

Publication number
CN111832560A
CN111832560A
Authority
CN
China
Prior art keywords
target
information
condition
determining
content
Prior art date
Legal status: Pending
Application number
CN202010578205.3A
Other languages
Chinese (zh)
Inventor
张新华
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010578205.3A
Publication of CN111832560A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/60: Type of objects
    • G06V 20/62: Text, e.g. of license plates, overlay texts or captions on TV images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an information output method, apparatus, device, and medium, belonging to the technical field of electronic equipment. The information output method includes: acquiring scene information; determining a target condition according to the scene information; acquiring a target image; and, in the case where the target image includes target information matching the target condition, outputting the target information. The information output method, apparatus, device, and medium can improve the efficiency of information acquisition.

Description

Information output method, device, equipment and medium
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to an information output method, device, equipment and medium.
Background
Currently, to make information easy to view, information providers often display large amounts of information through various display methods, such as large screens, information notification bars, and the like. For example, in train stations, hospitals, and lottery stores, information is often displayed by scrolling on a large screen. A user can check the information of interest on such a screen (for example, the waiting room, ticket gate, and platform corresponding to his or her train number), but because the screen displays a large amount of content that updates quickly, the user can obtain the information of interest only by keeping watch on the screen.
However, in the course of implementing the present application, the inventors found that the related art has at least the following problem: among the many pieces of displayed information, it is difficult for a user to quickly find the information he or she cares about, so the efficiency of acquiring that information is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide an information output method, apparatus, device, and medium, which can solve the problem of low information acquisition efficiency.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an information output method, including:
acquiring scene information;
determining a target condition according to the scene information;
acquiring a target image;
in the case where the target image includes target information matching the target condition, the target information is output.
In a second aspect, an embodiment of the present application provides an information output apparatus, including:
the first acquisition module is used for acquiring scene information;
the determining module is used for determining a target condition according to the scene information;
the second acquisition module is used for acquiring a target image;
and the output module is used for outputting the target information under the condition that the target image comprises the target information matched with the target condition.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
In the embodiments of the present application, scene information is acquired; a target condition is determined according to the scene information; a target image is acquired; and, in the case where the target image includes target information matching the target condition, the target information is output. The user can obtain the information of interest from the output target information, without having to keep watching a display that shows a large amount of rapidly updating content, so the efficiency of information acquisition can be improved.
Drawings
Fig. 1 is a schematic flowchart of an information output method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of outputting target information in an image manner according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of outputting target information in an image manner according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an information output apparatus according to an embodiment of the present application;
fig. 5 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The information output method, apparatus, device and medium provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of an information output method according to an embodiment of the present application. The information output method may include:
s101: and acquiring scene information.
S102: and determining the target condition according to the scene information.
S103: and acquiring a target image.
S104: in the case where the target image includes target information matching the target condition, the target information is output.
It should be noted that, in the information output method provided in the embodiment of the present application, the execution subject may be an information output apparatus, or a control module in the information output apparatus for executing the information output method. In the embodiments of the present application, the information output method is described by taking an information output apparatus executing the method as an example.
The information output device acquires scene information; determining a target condition according to the scene information; acquiring a target image; in the case where the target image includes target information matching the target condition, the target information is output.
Specific implementations of the above steps will be described in detail below.
In the embodiments of the present application, scene information is acquired; a target condition is determined according to the scene information; a target image is acquired; and, in the case where the target image includes target information matching the target condition, the target information is output. The user can obtain the information of interest from the output target information, without having to keep watching a display that shows a large amount of rapidly updating content, so the efficiency of information acquisition can be improved.
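As a concrete illustration, the four steps S101 to S104 can be sketched as a simple pipeline. All function names, data structures, and the sample screen contents below are hypothetical; the patent does not prescribe any particular implementation.

```python
# Hypothetical sketch of the four-step flow (S101-S104).

def acquire_scene_information():
    # S101: e.g. derived from GPS, a calendar entry, or stored ticket data
    return {"scene": "train_station", "ticket": {"train_no": "G7540"}}

def determine_target_condition(scene_info):
    # S102: derive the condition the user cares about from the scene information
    return scene_info.get("ticket", {}).get("train_no")

def find_target_information(target_image, target_condition):
    # S103/S104: scan the recognized text lines of the captured image for a match
    return [line for line in target_image["text_lines"] if target_condition in line]

scene = acquire_scene_information()
condition = determine_target_condition(scene)
# The "target image" is represented here by its recognized text lines only
image = {"text_lines": ["G7540 Beijing 10:05 Gate A12",
                        "G1203 Shanghai 10:20 Gate B3"]}
matches = find_target_information(image, condition)
if matches:
    print(matches[0])  # S104: output the target information
```

The representation of the target image as pre-recognized text lines is a simplification; in practice an image recognition step would produce them.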
Scenarios of embodiments of the present application include, but are not limited to, concert scenarios, airport scenarios, station scenarios, lottery store scenarios, hospital scenarios, and the like.
For example, assume that a singer holds a concert at a certain place and time. When the user is determined to be in the concert scene at that time, scene information of the concert, such as a photograph of the singer, is acquired, and the photograph of the singer is determined as the target condition. The user then captures an image of the stage with the image acquisition component; that is, the target image is acquired. When the captured image of the stage includes a person matching the photograph of the singer, the portion of the image containing that person is output.
For another example, suppose that the user has purchased a train ticket from location A to location B for a certain time. When the user is at the train station, the station scene is determined, and station scene information, such as the train station at location A, is obtained. The user's train ticket information is determined as the target condition. When the user captures an image of the information displayed on the large screen in the waiting hall with the image acquisition component, the target image is acquired. The large screen displays the train number, terminal station, departure time, ticket gate, platform, and status of each train. When the displayed information includes information matching the user's train ticket information, that information is output.
In some possible implementations of the embodiments of the present application, S104 may include: and outputting the target information in a preset output mode.
In some possible implementations of the embodiments of the present application, the preset output mode includes, but is not limited to, an image mode, a text mode, and a voice mode.
Illustratively, the target information is information matched with the train ticket information of the user.
An image of information matching the user's train ticket information may be output.
And outputting characters corresponding to the information matched with the train ticket information of the user.
And outputting the voice corresponding to the information matched with the train ticket information of the user by using the audio output unit.
Illustratively, a station scene is taken as an example. Assume that the target condition is the user's train number "G7540". Outputting the target information in an image manner is shown in fig. 2. Fig. 2 is a first schematic diagram of outputting target information in an image manner according to an embodiment of the present application. The image output in fig. 2 includes information corresponding to the train number "G7540". It can be understood that the information corresponding to the train number "G7540" is the target information.
According to the embodiments of the present application, the user can obtain the target information simply by browsing the output image, which improves the speed and efficiency of information acquisition.
In some possible implementations of the embodiments of the present application, whether the user is wearing a headset may be detected, and when it is detected that the user is wearing a headset, the target information may be output in a voice manner.
Specifically, when a communication connection with a headset is detected, it is determined that the user is wearing the headset, and the target information is output in a voice manner.
In some possible implementations of the embodiments of the present application, when it is determined that the user's gaze is focused on the screen, the user is not wearing a headset, and the electronic device is in a mute state or a low-volume state (the volume is lower than a preset volume), the target information may preferentially be output in an image manner or a text manner.
In some possible implementations of the embodiments of the present application, when it is determined that the user's gaze is not focused on the screen, the user is not wearing a headset, and the electronic device is in a high-volume state (the volume is higher than the preset volume), or when it is detected that the user is wearing a headset, the target information may preferentially be output in a voice manner.
In some possible implementations of the embodiments of the present application, the target information may preferentially be output in a voice manner when the user is wearing a headset.
In some possible implementations of the embodiments of the present application, the focus of the user's gaze may be determined by another image capture component, and when the gaze is determined to be on the screen, the target information may be output in an image manner or a text manner.
By the embodiment of the application, the user can quickly acquire the target information, and the information acquisition speed and efficiency are improved.
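The output-mode selection rules described above can be sketched as a small decision function. The priority order and parameter names are assumptions chosen for illustration; the patent does not specify exact logic.

```python
def choose_output_mode(headset_connected, gaze_on_screen, volume, preset_volume):
    # Decision rules distilled from the description: a connected headset implies
    # voice output; otherwise gaze focus and volume state decide the mode.
    if headset_connected:
        return "voice"            # headset detected: speak the target information
    if gaze_on_screen and volume < preset_volume:
        return "image_or_text"    # looking at screen, muted or low volume
    if not gaze_on_screen and volume > preset_volume:
        return "voice"            # not looking at screen, high volume
    return "image_or_text"        # fall back to on-screen output
```

A caller would feed this function the headset-connection state, the gaze-tracking result, and the current and preset volumes, then dispatch to the corresponding output path.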
In some possible implementations of the embodiment of the present application, in the process of capturing an image by using the image capturing component, each frame of image captured by the image capturing component may be sequentially used as a target image, and then it is determined whether the target image includes target information matching a target condition, that is, it is sequentially determined whether each frame of image includes target information matching the target condition. When a certain frame image includes target information matching the target condition, the target information can be output.
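The frame-by-frame scan described above can be sketched as a generator. Representing each frame as a list of recognized text lines is a simplifying assumption, not something the patent specifies.

```python
def scan_frames(frames, target_condition):
    # Sequentially treat each frame captured by the image acquisition component
    # as the target image, yielding the matching lines of any frame that
    # contains target information matching the target condition.
    for frame in frames:
        hits = [line for line in frame if target_condition in line]
        if hits:
            yield hits

frames = [
    ["G1203 Shanghai 10:20 Gate B3"],                        # frame 1: no match
    ["G7540 Beijing 10:05 Gate A12", "K1066 Xi'an 11:00"],   # frame 2: match
]
first_hit = next(scan_frames(frames, "G7540"))
```

Because it is a generator, the caller can stop after the first matching frame or keep consuming frames as the display updates.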
In some possible implementations of the embodiment of the present application, after S103 and before S104, the information output method provided in the embodiment of the present application may further include: displaying a target identifier, or displaying the target information in a first manner, in a case where the target image includes target information matching the target condition; the target identifier is used for indicating the position of the target information on the target image. Accordingly, outputting the target information may include: outputting a target image, where the target image includes the target identifier or the target information displayed in the first manner.
In some possible implementations of the embodiment of the present application, the first mode may be a text mode or an image mode.
The embodiment of the present application does not limit the specific form of the identifier that indicates the position of the target information on the target image; any available form of identification can be applied. For example, the target information may be color-coded, or it may be marked with a frame.
Illustratively, again taking a station scene as an example, assume that the target condition is the user's train number "G7540". If the target image includes target information corresponding to train number "G7540", that target information is marked with a frame, and the target image with the framed target information is output; the output result is shown in fig. 3. Fig. 3 is a second schematic diagram of outputting target information in an image manner according to an embodiment of the present application. The image output in fig. 3 includes the information corresponding to train number "G7540", marked with a frame. It can be understood that the information corresponding to train number "G7540" is the target information.
According to the embodiments of the present application, because the target information in the target image is marked, the marked area is clearly distinguished from the other areas. The user can identify the mark at a glance and thereby obtain the target information quickly, improving the speed and efficiency of information acquisition.
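The framing step can be sketched as selecting the bounding boxes of matched text rows, which a renderer would then draw onto the output image. The OCR row format and pixel coordinates below are hypothetical.

```python
def mark_target(ocr_results, target_condition):
    # ocr_results: list of (text, bounding_box) pairs, where bounding_box is
    # (left, top, right, bottom) in image pixels. Returns the boxes to frame.
    return [box for text, box in ocr_results if target_condition in text]

rows = [
    ("G1203 Shanghai 10:20 Gate B3", (40, 120, 980, 160)),
    ("G7540 Beijing 10:05 Gate A12", (40, 170, 980, 210)),
]
boxes = mark_target(rows, "G7540")
# A renderer would draw a rectangle (or a color highlight) at each returned box
```

The same selection could drive either identification form mentioned above: drawing a frame around the box, or recoloring the pixels inside it.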
In some possible implementations of the embodiment of the present application, after outputting the target information, the information output method provided in the embodiment of the present application may further include: storing the target information in the note in a second manner; wherein the second mode comprises: at least one of text, images, and audio.
In some possible implementations of the embodiments of the application, the text content of the target information may be extracted using an image recognition technology, and the text content corresponding to the target information is then stored in the note. Image recognition techniques include, but are not limited to, Optical Character Recognition (OCR).
In some possible implementations of the embodiment of the application, the OCR technology may be utilized to extract the text content corresponding to the target information, then convert the text content into audio, and store the audio in the note.
Through the embodiment of the application, the target information can be stored in the note in the second mode, so that the user can check the target information conveniently, and the target information can be obtained through the note under the condition that the user forgets the target information.
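The note-storage step can be sketched as follows. The note structure and field names are assumptions; in practice the stored content would come from an OCR pass over the target image (or, for the audio manner, from a text-to-speech conversion of that content).

```python
import time

def save_note(content, mode="text", notes=None):
    # Store the target information in a note in the "second manner"
    # (text, image, or audio). The entry structure is illustrative only.
    notes = notes if notes is not None else []
    notes.append({"timestamp": time.time(), "mode": mode, "content": content})
    return notes

# Text assumed to have been extracted from the target image, e.g. via OCR
notes = save_note("G7540 Beijing 10:05 Gate A12")
```

The user can later retrieve the entry from the note if the target information has been forgotten, as the description above notes.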
In some possible implementations of the embodiment of the present application, after S102, the information output method provided in the embodiment of the present application may further include: receiving a first input in the event that determining the target condition from the scene information fails; and, in response to the first input, determining the target condition based on the scene information and the input content of the first input.
For example, suppose the user is seeing a relative off to location B at the location A train station and has no train ticket of his or her own. In this case, only the station scene can be determined; since there is no train ticket information of the user, the target condition cannot be determined from the scene information alone, i.e., determining the target condition fails. The user may then input the relative's train ticket information (e.g., the train number), and the target condition is determined according to the scene information and the relative's train ticket information.
In some possible implementations of embodiments of the present application, the input content may include first content, second content, and a logical relationship. Determining the target condition according to the scene information and the input content of the first input may include: determining a first condition according to the scene information and the first content; and determining a second condition according to the scene information and the second content; where, in the case that the logical relationship is OR, the target condition is the first condition or the second condition, and in the case that the logical relationship is AND, the target condition is both the first condition and the second condition.
For example, suppose the user, at the location A train station, is seeing relative X off to location B and relative Y off to location C, and has no train ticket of his or her own. In this case, the user can input the train ticket information of relative X, the train ticket information of relative Y, and the logical relationship OR.
A first condition is determined according to the scene information and relative X's train ticket information, and a second condition is determined according to the scene information and relative Y's train ticket information. According to the logical relationship OR input by the user, the target condition is determined to be the first condition or the second condition.
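The combination of two conditions under an input logical relationship can be sketched as a predicate builder. The conditions here are simple per-line predicates, and the two train numbers are made up for illustration.

```python
def combined_condition(first_condition, second_condition, relation):
    # Combine two predicates over a text line according to the input logical
    # relationship: with OR either condition suffices, with AND both must hold.
    if relation == "or":
        return lambda line: first_condition(line) or second_condition(line)
    if relation == "and":
        return lambda line: first_condition(line) and second_condition(line)
    raise ValueError("unknown relation: " + relation)

# Hypothetical conditions built from the two relatives' train tickets
cond_x = lambda line: "G7540" in line   # relative X's train number
cond_y = lambda line: "K1066" in line   # relative Y's train number
target = combined_condition(cond_x, cond_y, "or")
```

With OR, a line of screen text matching either relative's ticket is output; switching the relation to AND would require a single line to match both, which suits conditions that constrain different fields of the same entry.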
In some possible implementations of the embodiments of the present application, S102 may include: determining a target condition based on the context information and any one of:
default information, a historical search condition corresponding to the scene where the user is located, predetermined information corresponding to the geographic position of the user, or information input by the user.
In some possible implementations of the embodiment of the application, after the scene information is acquired, default information may be called, and the target condition is determined according to the scene information and the default information.
In some possible implementations of the embodiments of the application, after the scene information is acquired (for example, the user is determined to be at a concert), a historical search condition corresponding to the concert may be obtained, and the target condition is determined according to the scene information and the obtained historical search condition. For example, if the user previously watched a concert with the image of a certain star as the target condition, the correspondence between the concert scene and the star's image is recorded; when a concert scene is obtained again, the star's image is used as the target condition.
In some possible implementations of the embodiment of the application, after the scene information is acquired, the geographic position of the user may be acquired, and then the target condition is determined according to the scene information and the predetermined information corresponding to the geographic position of the user.
For example, the geographic location of the user may be obtained through the Global Positioning System (GPS). Assuming that the user is in a station scene and is located at train station A, the user's ticket booking information is obtained from a train ticket booking website; when the booking information includes a ticket departing from train station A, the train number in that booking information may be determined as the target condition.
For another example, assuming it is determined that the user is at hospital B, the user's registration information is acquired from the registration website; when the registration information includes the name of hospital B, the department information in the registration information may be determined as the target condition.
According to the embodiments of the present application, the target condition is determined from default information, the historical search condition corresponding to the user's scene, or the predetermined information corresponding to the user's geographic position, so the target condition can be determined quickly without requiring the user to input it, which improves the efficiency of information acquisition.
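Selecting among the listed sources can be sketched as a simple fallback chain. The priority order below (explicit user input first, default information last) is an assumption; the patent only states that the condition is determined from the scene information plus any one of the sources.

```python
def condition_from_sources(scene, user_input=None, location_info=None,
                           history=None, default=None):
    # Pair the scene with the first available auxiliary source. The ordering
    # is illustrative; any single listed source may supply the condition.
    for source in (user_input, location_info, history, default):
        if source:
            return (scene, source)
    return None  # no source available: determination fails (see first input)
```

Returning None models the failure case handled above, where the device then asks the user for a first input.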
In some possible implementations of embodiments of the present application, the user may manually modify the target condition; in this case, the target condition is determined based on the information input by the user.
In some possible implementations of the embodiments of the present application, the information input by the user may be information associated with the user himself or information associated with other users (e.g., a father, a mother, etc. of the user).
According to the embodiments of the present application, the user can manually input information from which the target condition is determined, so the target condition meets the user's requirement and the accuracy of the searched information is ensured.
Fig. 4 is a schematic structural diagram of an information output apparatus according to an embodiment of the present application. The information output apparatus may include:
a first obtaining module 401, configured to obtain scene information.
A determining module 402, configured to determine a target condition according to the scene information.
A second obtaining module 403, configured to obtain a target image.
An output module 404, configured to output the target information if the target image includes target information matching the target condition.
In the embodiments of the present application, scene information is acquired; a target condition is determined according to the scene information; a target image is acquired; and, in the case where the target image includes target information matching the target condition, the target information is output. The user can obtain the information of interest from the output target information, without having to keep watching a display that shows a large amount of rapidly updating content, so the efficiency of information acquisition can be improved.
In some possible implementations of the embodiments of the present application, the information output apparatus provided in the embodiments of the present application may further include:
a receiving module, configured to receive a first input in a case where determining the target condition according to the scene information fails;
the determining module 402 may be further configured to:
in response to the second input, a target condition is determined based on the scene information and the input content of the first input.
In some possible implementations of embodiments of the present application, the input content includes: a first content, a second content, and a logical relationship;
the determining module may be specifically configured to:
determining a first condition according to the scene information and the first content;
determining a second condition according to the scene information and the second content;
wherein, in the case that the logical relationship is OR, the target condition is a first condition or a second condition;
in the case where the logical relationship is AND, the target condition is both the first condition and the second condition.
In some possible implementations of the embodiments of the present application, the information output apparatus provided in the embodiments of the present application may further include:
a display module, configured to display a target identifier, or display the target information in a first manner, in a case where the target image includes target information matching the target condition; the target identifier is used for indicating the position of the target information on the target image;
the output module 404 may specifically be configured to:
and outputting a target image, wherein the target image comprises target identification or target information displayed in a first mode.
According to the embodiments of the present application, because the target information in the target image is marked, the marked area is clearly distinguished from the other areas. The user can identify the mark at a glance and thereby obtain the target information quickly, improving the speed and efficiency of information acquisition.
In some possible implementations of the embodiments of the present application, the information output apparatus provided in the embodiments of the present application may further include:
the storage module is used for storing the target information in the note in a second mode; wherein the second mode comprises: at least one of text, images, and audio.
Through the embodiment of the application, the target information can be stored in the note in the second mode, so that the user can check the target information conveniently, and the target information can be obtained through the note under the condition that the user forgets the target information.
The information output device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a kiosk, or the like; the embodiments of the present application are not specifically limited.
The information output device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this regard.
The information output device provided in the embodiment of the present application can implement each process in the information output method embodiments of fig. 1 to fig. 3, and is not described here again to avoid repetition.
Fig. 5 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and the like.
The input unit 504 may include a graphics processor 5041 and a microphone 5042. The display unit 506 may include a display panel 5061. The user input unit 507 includes a touch panel 5071 and other input devices 5072.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 510 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented via the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently. Details are omitted here.
The processor 510 is configured to: obtain scene information; determine a target condition according to the scene information; acquire a target image; and output target information in a case that the target image includes target information matching the target condition.
In the embodiment of the application, scene information is acquired; a target condition is determined according to the scene information; a target image is acquired; and in a case that the target image includes target information matching the target condition, the target information is output. Through the output target information, the user can obtain the information of interest without having to sift through a large amount of rapidly updated content, so information acquisition efficiency can be improved.
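The four processor steps above can be sketched end to end as follows; the scene-to-condition mapping and the keyword-based matcher are illustrative assumptions, not the recognition logic of the disclosure:

```python
# Hypothetical mapping from recognized scene information to a target
# condition, modeled here as a predicate over text found in the target image.
SCENE_CONDITIONS = {
    "airport": lambda text: "gate" in text.lower() or "flight" in text.lower(),
    "station": lambda text: "platform" in text.lower(),
}


def determine_target_condition(scene_info):
    """Determine a target condition from the scene information.
    Returning None models a determination failure."""
    return SCENE_CONDITIONS.get(scene_info)


def output_target_info(scene_info, image_texts):
    """Run the flow: scene info -> target condition -> scan the target
    image's recognized text -> output the matching target information."""
    condition = determine_target_condition(scene_info)
    if condition is None:
        return None  # caller may fall back to user input (the first input)
    return [text for text in image_texts if condition(text)]


texts = ["Flight MU123 Gate B12", "Duty free shop", "Exit"]
print(output_target_info("airport", texts))  # → ['Flight MU123 Gate B12']
```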
In some possible implementations of embodiments of the present application, the user input unit 507 is configured to receive a first input in a case that determining the target condition according to the scene information fails.
Accordingly, processor 510 may be further configured to:
in response to the first input, determine the target condition according to the scene information and the input content of the first input.
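A non-limiting sketch of this fallback, also covering the logical relationship of claim 3 (combining two conditions derived from the input content with OR or AND); the helper names and keyword-matching rule are hypothetical:

```python
def condition_from(scene_info, content):
    """Hypothetical helper: build a predicate from the scene information
    and one piece of user-input content (a keyword of interest)."""
    keyword = str(content).lower()
    return lambda text: keyword in text.lower()


def determine_condition_with_input(scene_info, first_content, second_content,
                                   logical_relationship):
    """Combine the two derived conditions per the input logical
    relationship: OR is satisfied when either condition matches,
    AND requires both conditions to match."""
    first = condition_from(scene_info, first_content)
    second = condition_from(scene_info, second_content)
    if logical_relationship == "or":
        return lambda text: first(text) or second(text)
    if logical_relationship == "and":
        return lambda text: first(text) and second(text)
    raise ValueError(f"unsupported logical relationship: {logical_relationship}")


cond = determine_condition_with_input("airport", "gate", "MU123", "and")
print(cond("Flight MU123 Gate B12"))  # → True
print(cond("Gate C3"))                # → False
```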
In some possible implementations of embodiments of the present application, the display unit 506 may be configured to:
displaying a target identifier or displaying the target information in a first manner in a case that the target image includes target information matching the target condition; the target identifier is used to indicate the position of the target information on the target image.
Accordingly, processor 510 may be specifically configured to:
output the target image, where the target image includes the target identifier or the target information displayed in the first manner.
According to this embodiment of the application, because the target information in the target image is marked, the marked area is clearly distinct from the other areas. The user can quickly identify the mark with the naked eye and thereby locate the target information, so the target information can be obtained quickly, improving the speed and efficiency of information acquisition.
In some possible implementations of embodiments of the application, the processor 510 may be further configured to:
store the target information in a note in a second manner; where the second manner includes: at least one of text, an image, and audio.
Through this embodiment of the application, the target information can be stored in a note in the second manner, which makes it convenient for the user to review the target information and to retrieve it from the note in a case that the user has forgotten it.
Optionally, an embodiment of the present application further provides an electronic device, including a processor 510, a memory 509, and a program or instruction stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the program or instruction implements each process of the above information output method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned information output method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above information output method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An information output method, characterized in that the method comprises:
acquiring scene information;
determining a target condition according to the scene information;
acquiring a target image;
and outputting the target information when the target image comprises the target information matched with the target condition.
2. The method of claim 1, wherein after determining a target condition based on the context information, the method further comprises:
in a case that determining the target condition according to the scene information fails, receiving a first input;
in response to the first input, determining the target condition according to the scene information and the input content of the first input.
3. The method of claim 2, wherein the input content comprises: a first content, a second content, and a logical relationship;
the determining the target condition according to the scene information and the input content of the first input includes:
determining a first condition according to the scene information and the first content;
determining a second condition according to the scene information and the second content;
wherein, in a case that the logical relationship is OR, the target condition is the first condition or the second condition;
in a case that the logical relationship is AND, the target condition is the first condition and the second condition.
4. The method of claim 1, wherein after said acquiring a target image, prior to said outputting said target information, said method further comprises:
displaying a target identifier or displaying the target information in a first manner in a case where the target image includes target information matching the target condition;
wherein the target identifier is used for indicating the position of the target information on the target image;
the outputting the target information includes:
outputting the target image, the target image including the target identifier or the target information displayed in the first manner.
5. The method of claim 1, wherein after said outputting the target information, the method further comprises:
storing the target information in a note in a second manner; wherein the second mode includes: at least one of text, images, and audio.
6. An information output apparatus, characterized in that the apparatus comprises:
a first acquisition module, configured to acquire scene information;
a determining module, configured to determine a target condition according to the scene information;
a second acquisition module, configured to acquire a target image; and
an output module, configured to output target information in a case that the target image includes the target information matching the target condition.
7. The apparatus of claim 6, further comprising:
a receiving module, configured to receive a first input in a case that determining the target condition according to the scene information fails;
the determining module is further configured to:
in response to the first input, determine the target condition according to the scene information and the input content of the first input.
8. The apparatus of claim 7, wherein the input content comprises: a first content, a second content, and a logical relationship;
the determining module is specifically configured to:
determining a first condition according to the scene information and the first content;
determining a second condition according to the scene information and the second content;
wherein, in a case that the logical relationship is OR, the target condition is the first condition or the second condition;
in a case that the logical relationship is AND, the target condition is the first condition and the second condition.
9. The apparatus of claim 6, further comprising:
a display module for displaying a target identifier or displaying the target information in a first manner if the target image includes target information matching the target condition; wherein the target identifier is used for indicating the position of the target information on the target image;
the output module is specifically configured to:
outputting the target image, the target image including the target identifier or the target information displayed in the first manner.
10. The apparatus of claim 6, further comprising:
a storage module, configured to store the target information in a note in a second manner; wherein the second manner includes: at least one of text, an image, and audio.
11. An electronic device, characterized in that the electronic device comprises: processor, memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor implement the steps of the information output method of any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the information output method according to any one of claims 1 to 5.
CN202010578205.3A 2020-06-23 2020-06-23 Information output method, device, equipment and medium Pending CN111832560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010578205.3A CN111832560A (en) 2020-06-23 2020-06-23 Information output method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN111832560A true CN111832560A (en) 2020-10-27


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239567A * 2017-06-22 2017-10-10 Nubia Technology Co., Ltd. Scene object recognition method, device, and computer-readable storage medium
CN110808048A * 2019-11-13 2020-02-18 Lenovo (Beijing) Co., Ltd. Voice processing method, device, system and storage medium
CN111176604A * 2019-11-04 2020-05-19 Guangdong Genius Technology Co., Ltd. Method for outputting message information, smart speaker and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination