CN110275977B - Information display method and device - Google Patents
- Publication number: CN110275977B
- Application number: CN201910584263.4A
- Authority
- CN
- China
- Prior art keywords
- marker
- environment
- information
- image
- related information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/38—Outdoor scenes
Abstract
An embodiment of the invention provides an information display method and device. The method includes: framing the environment to obtain image information in a framing range; identifying the content contained in the image information in the view finding range to obtain at least one marker contained in the image information and related information corresponding to the at least one marker; and displaying the related information corresponding to the at least one marker in the image information in the view finding range. A user can thus obtain the related information of the markers contained in an image through image acquisition, which improves the efficiency with which the user obtains marker information, spares the user a secondary query about the markers of interest, and further improves the user experience.
Description
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an information display method and apparatus.
Background
Currently, the online content seen by a user is difficult to combine fully with the actual environment. For a scenic spot, for example, there is a large gap between the online introduction, photos and the like and the actual scene, so the user finds it hard to associate the current environment with the introduction. As a result, when the user actually arrives in a certain environment, the user can only look for a nearby marker to work out the current specific position or environment, and then has to open a mobile phone to search for the related introduction of the environment. The user therefore cannot quickly identify the markers of the environment, and consequently cannot quickly identify the environment; if environment-related information is needed, a secondary query is also required, which is inefficient.
Disclosure of Invention
The embodiment of the invention provides an information display method and device, which are used for solving one or more technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides an information display method, including:
framing the environment to obtain image information in a framing range;
identifying the content contained in the image information in the view finding range to obtain at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker;
and displaying the related information corresponding to the at least one marker in the image information in the view finding range.
In one embodiment, the method further comprises:
and acquiring the space position coordinates of the environment.
In one embodiment, the identifying the content included in the image information in the view-finding range to obtain at least one marker included in the image information in the view-finding range and related information corresponding to the at least one marker includes:
determining at least one reference based on the spatial location coordinates of the environment;
and determining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker based on the content contained in the image information in the view finding range and the at least one reference object.
In one embodiment, the method further comprises:
obtaining link information corresponding to at least part of the markers in the at least one marker;
and correlating the link information corresponding to the at least partial marker with the at least partial marker.
In one embodiment, the method further comprises:
detecting operation information for the first marker;
and displaying the link information of the first marker based on the operation information.
In one embodiment, the determining at least one marker included in the image information in the view-finding range and related information corresponding to the at least one marker includes:
updating the environment confidence of the current environment based on the identification accuracy probability of the key marker in the at least one marker;
determining an identification accuracy probability of an auxiliary marker in the at least one marker based on the updated environmental confidence;
performing iterative processing on the environment confidence based on the identification accuracy probability of the auxiliary marker; and obtaining the at least one marker and corresponding related information thereof.
In a second aspect, an embodiment of the present invention provides an information display apparatus, including:
the acquisition unit is used for framing the environment to obtain image information in a framing range;
the processing unit is used for identifying the content contained in the image information in the view finding range to obtain at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker;
and the display unit displays the related information corresponding to the at least one marker in the image information in the view finding range.
In one embodiment, the processing unit is configured to obtain spatial location coordinates of the environment.
In one embodiment, the processing unit is configured to determine at least one reference object based on spatial position coordinates of the environment; and determining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker based on the content contained in the image information in the view finding range and the at least one reference object.
In one embodiment, the processing unit is configured to obtain link information corresponding to at least some of the at least one marker; and correlating the link information corresponding to the at least partial marker with the at least partial marker.
In one embodiment, the processing unit is configured to detect the operation information for the first marker; and controlling the display of the link information of the first marker through the display unit based on the operation information.
In one embodiment, the processing unit is configured to update the environment confidence of the current environment based on an identification accuracy probability of a key marker in the at least one marker;
determining an identification accuracy probability of an auxiliary marker in the at least one marker based on the updated environmental confidence;
performing iterative processing on the environment confidence based on the identification accuracy probability of the auxiliary marker; and obtaining the at least one marker and corresponding related information thereof.
In a third aspect, an embodiment of the present invention provides an information display apparatus, where the function of the apparatus may be implemented by hardware, or may be implemented by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the structure of the information display device includes a processor and a memory, the memory being configured to store a program that supports the information display device in executing the above information display method, and the processor being configured to execute the program stored in the memory. The information display device may further include a communication interface for communicating with other devices or a communication network.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer software instructions for use by an information display device, including a program for executing the above-described information display method.
One of the above technical solutions has the following advantages or beneficial effects:
The image information in the view finding range is identified, at least one marker contained in the image information is then identified, and finally the related information of the marker is displayed in the image information. In this way, the user can obtain the related information of the markers contained in the image through image acquisition, which improves the efficiency with which the user obtains marker information and environment-related information, spares the user a secondary query for the environment-related information, and improves the user experience.
The foregoing summary is for the purpose of the specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will become apparent by reference to the drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
FIG. 1 shows a flow chart of an information presentation method of an embodiment of the present invention;
FIG. 2 illustrates an iterative process flow between environmental confidence and markers in an embodiment of the present invention;
FIGS. 3 and 4 are schematic diagrams of a processing scenario according to an embodiment of the present invention;
FIG. 5 is a block diagram showing the constitution of an information display device according to an embodiment of the present invention;
fig. 6 shows a block diagram of a second component of an information display device according to an embodiment of the present invention.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
In one implementation, FIG. 1 shows a flowchart of an information display method according to an embodiment of the present invention. The method includes:
step S11: framing the environment to obtain image information in a framing range;
step S12: identifying the content contained in the image information in the view finding range to obtain at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker;
step S13: and displaying the related information corresponding to the at least one marker in the image information in the view finding range.
The embodiment can be applied to terminal equipment with an image acquisition function and an image recognition function. The image acquisition function can be realized by a camera, and the image recognition function can be realized by a processor in the terminal equipment.
The image information in the view finding range may be the image displayed on the screen before the photographing key is pressed; in this case, the image displayed on the screen may be stored in the cache. Alternatively, it may be the image information stored in the memory after the photographing key is pressed. This embodiment is not limited in this respect.
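To make the flow of steps S11 to S13 concrete, the following minimal Python sketch shows how a terminal application might wire them together. The camera, recognizer and screen objects and their methods are illustrative assumptions, not an API defined by this embodiment.

```python
# Illustrative sketch of steps S11-S13; all object and method names are assumptions.

def display_marker_info(camera, recognizer, screen):
    # Step S11: frame the environment and obtain the image information in the view finding range.
    image = camera.capture_viewfinder()

    # Step S12: identify the content of the image to obtain the markers it contains
    # and the related information corresponding to each marker.
    markers = recognizer.identify(image)  # expected to return a list of (marker, related_info)

    # Step S13: display the related information in the image within the view finding range.
    for marker, related_info in markers:
        screen.draw_label(marker.bounding_box, related_info)
    screen.show(image)
```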
This embodiment can also acquire the spatial position coordinates of the environment. The spatial position coordinates may include the longitude and latitude of the position where the terminal equipment is located, or the longitude, latitude and altitude of that position. The spatial position coordinates may be acquired through a GPS module.
Identifying the content contained in the image information in the view finding range to obtain at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker may be performed as follows:
matching the content contained in the image information against the images of at least one marker, and the related information of the at least one marker, contained in a local image library, and determining the at least one marker contained in the image and the related information corresponding to the at least one marker based on the matching degree; specifically, the markers in the library images whose matching degree is greater than a preset threshold value are taken as the markers contained in the image information.
Or,
determining at least one reference based on the spatial location coordinates of the environment; and determining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker based on the content contained in the image information in the view finding range and the at least one reference object.
The manner of determining the at least one reference object may be: according to the spatial position coordinates of the current environment, the name of at least one reference object around the environment can be determined in combination with a map, and an image of the at least one reference object can then be acquired in combination with the image library. Alternatively, the image of the at least one reference object stored on the server side may be acquired by connecting to the server.
Alternatively, a key marker of the image information in the view finding range may first be determined based on the image library or on at least one reference object determined from the spatial position coordinates; a first marker is then selected from the image information in the view finding range, and the degree of association between the first marker and the key marker is detected; when the degree of association is higher than a preset threshold value, the first marker is confirmed (identification succeeds), otherwise identification of the first marker fails, that is, the related information of the first marker is deleted; a second marker is then selected, and so on, until all candidate markers contained in the image information in the view finding range have been processed, and the markers thus obtained are output as the at least one marker.
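The following Python sketch illustrates the filtering described above, under the assumption that the similarity scoring functions are supplied by the caller: candidates are matched against the image and then kept or discarded according to their degree of association with the key marker. The helper callables and threshold values are examples, not part of the patented method.

```python
# Sketch of candidate filtering against a key marker; thresholds and helpers are assumptions.

MATCH_THRESHOLD = 0.8        # example preset threshold for the library matching degree
ASSOCIATION_THRESHOLD = 0.6  # example preset threshold for association with the key marker

def filter_markers(image, candidates, key_marker, match_degree, association_degree):
    """candidates: iterable of (marker, related_info); match_degree and association_degree:
    caller-supplied scoring functions returning values in [0, 1]."""
    identified = []
    for marker, related_info in candidates:
        if match_degree(image, marker) <= MATCH_THRESHOLD:
            continue  # matching degree not above the preset threshold: marker not in the image
        if association_degree(marker, key_marker) > ASSOCIATION_THRESHOLD:
            identified.append((marker, related_info))  # identification succeeds
        # otherwise identification fails and the related information is discarded
    return identified
```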
The determining of the at least one marker contained in the image information in the view finding range and the related information corresponding to the at least one marker includes:
updating the environment confidence of the current environment based on the identification accuracy probability of the key marker in the at least one marker;
determining an identification accuracy probability of an auxiliary marker in the at least one marker based on the updated environmental confidence;
performing iterative processing on the environment confidence based on the identification accuracy probability of the auxiliary marker; and obtaining the at least one marker and corresponding related information thereof.
Referring to FIG. 2, the image information in the view finding range is first obtained and the environment confidence is set to 0; the identification accuracy of the key marker among the at least one marker in the image information, for example marker 1 in the figure, is then determined, and a new environment confidence is obtained based on the identification accuracy of the key marker;
it is then judged whether the new environment confidence is larger than the original environment confidence;
if the new environment confidence is larger, the environment confidence is set equal to the new environment confidence, which is the updated environment confidence; otherwise, the original environment confidence is kept and is understood as the updated environment confidence;
iterative processing is then performed based on the updated environment confidence and the identification accuracy of the auxiliary markers, determining further markers in the image information until the markers in the image information are finally obtained; the markers in the image information can then be processed in other respects, for example by adding network link information, which is described in detail later.
The environment confidence level may be any value from 0 to 1, for example, the environment confidence level f=0 is set first, that is to say, the initial environment confidence level is set to 0; the environmental confidence is further adjusted according to the identification accuracy of the markers, for example, the environmental confidence can be adjusted to 0.5 according to the identification accuracy of the key markers and the identification accuracy of the auxiliary markers.
When the environment confidence reaches a certain preset value, the identification of the markers in the current image is determined to be completed, and finally a plurality of output markers and corresponding related information thereof are obtained.
The preset value may be set according to practical situations, for example, may reach 0.9, and may of course also be set to other values, which are not limited herein.
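The iteration of FIG. 2 can be sketched as follows. The rule used to merge an auxiliary marker's accuracy into the confidence is an assumption for illustration; the embodiment only requires that a larger new confidence replaces the old one and that iteration stops once a preset value is reached.

```python
# Sketch of the environment-confidence iteration of FIG. 2; the averaging rule is an assumption.

CONFIDENCE_TARGET = 0.9  # example preset value at which identification is considered complete

def iterate_confidence(key_marker_accuracy, auxiliary_markers):
    """auxiliary_markers: iterable of (marker, identification_accuracy)."""
    confidence = 0.0                                   # initial environment confidence is 0
    confidence = max(confidence, key_marker_accuracy)  # update from the key marker first
    accepted = []
    for marker, accuracy in auxiliary_markers:
        new_confidence = (confidence + accuracy) / 2   # assumed combination rule
        if new_confidence > confidence:                # keep only a larger (updated) confidence
            confidence = new_confidence
        accepted.append(marker)
        if confidence >= CONFIDENCE_TARGET:            # preset value reached: identification done
            break
    return confidence, accepted
```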
The information related to the marker may be at least one of the following: the name of the marker, the type of the marker, the function of the marker, the characteristics of the marker and consumption related information of the marker. Specifically, the information can be obtained from the local database, and/or the information can be obtained from the network side server.
When displaying the related information corresponding to the at least one marker in the image information in the view finding range, which related information of a marker is displayed can be determined according to the size of the display area, the priorities of the various kinds of related information, and the like. For example, the name of the marker has the highest priority and must be contained in the displayed related information; the type of the marker and the function of the marker have the next priority and can be selected for display after it; other related information may likewise have a corresponding priority. The size of the display area must also be taken into account: for example, if the current display area can only show two kinds of related information, only the two kinds with the highest priority are displayed. The size of the display area may be preset, or may be adjusted by the user as needed, which is not limited herein.
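A minimal sketch of the priority-based selection described above, assuming the related information is held in a simple record; the field names, priority order and two-item limit are illustrative.

```python
# Sketch of priority-based selection of related information; field names are assumptions.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class RelatedInfo:
    name: str                            # highest priority: always displayed
    type: Optional[str] = None
    function: Optional[str] = None
    features: Optional[str] = None
    consumption: Optional[str] = None

PRIORITY = ["name", "type", "function", "features", "consumption"]

def select_for_display(info: RelatedInfo, max_items: int = 2) -> List[Tuple[str, str]]:
    shown = []
    for field in PRIORITY:
        value = getattr(info, field)
        if value is not None:
            shown.append((field, value))
        if len(shown) == max_items:      # display area can only hold max_items entries
            break
    return shown

# Example: a display area limited to two items shows only the name and the type.
# select_for_display(RelatedInfo(name="A", type="conference center", features="independent rooms"))
# -> [("name", "A"), ("type", "conference center")]
```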
The above can be further described with reference to FIG. 3 and FIG. 4. For example, the image information in the current user's view finding range is shown in the frame of FIG. 3; this image information is identified to obtain the related information of the markers shown in FIG. 4: for example, the name of marker A is A and its function is a conference center; marker C is a cafe with an independent conference room, and its characteristics, consumption level and the like can also be displayed.
Based on the above solution, further, this embodiment may further:
obtaining link information corresponding to at least part of the markers in the at least one marker;
and correlating the link information corresponding to the at least partial marker with the at least partial marker.
That is, link information corresponding to a part of the plurality of markers may be obtained, where the link information is used to connect to a network side server to invoke a network side resource corresponding to the link information. For example, the link information of the first marker may be specific content of the first marker on the network side server.
The method further comprises the steps of: detecting operation information for the first marker; and displaying the link information of the first marker based on the operation information.
That is, the link information of the markers may or may not be directly displayed.
When the link information of a marker is not displayed directly, its existence can be reflected on the marker itself; for example, the marker can be highlighted, or another type of mark can be applied to it, to indicate that link information corresponds to the marker.
Further, the user can determine whether the marker has link information according to the style of the marker.
When a marker has link information, for example when the first marker has link information, a user who wants to view the corresponding content can click on the first marker, and the corresponding link information is then displayed.
Clicking on the first marker may mean clicking on the first marker itself and/or clicking on the related information of the first marker.
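The link-information behaviour can be sketched as follows, assuming the markers are mutable objects and that a simple lookup table maps marker names to link information; the attribute and method names are assumptions.

```python
# Sketch of associating link information with markers and reacting to a click; names are assumptions.

def attach_links(markers, link_lookup):
    """link_lookup: dict mapping a marker name to its link information (e.g. a URL)."""
    for marker in markers:
        marker.link = link_lookup.get(marker.name)    # None when no link information exists
        marker.highlighted = marker.link is not None  # visually indicate that a link is available

def on_marker_clicked(marker, browser):
    """Display the link information only when an operation on the marker is detected."""
    if getattr(marker, "link", None) is not None:
        browser.open(marker.link)  # invoke the corresponding network-side resource
```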
By adopting the scheme, the image information in the view finding range can be identified, at least one marker contained in the image information is identified, and finally the related information of the marker is displayed in the image information. Therefore, the user can acquire the related information of the marker contained in the image through image acquisition, the efficiency of the user for acquiring the information of the marker is improved, the efficiency of the user for acquiring the related information of the environment is improved, the user can be prevented from carrying out secondary query on the related information of the environment, and the use experience of the user is improved.
In one implementation, FIG. 5 shows a schematic diagram of an information display device according to an embodiment of the present invention. The device includes:
the acquisition unit 31 is used for framing the environment to obtain image information in a framing range;
a processing unit 32, configured to identify content included in the image information within the view-finding range, and obtain at least one marker included in the image information within the view-finding range and related information corresponding to the at least one marker;
and a display unit 33 for displaying the related information corresponding to the at least one marker in the image information in the view finding range.
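Mirroring the method sketch above, a minimal composition of the three units of FIG. 5 might look as follows; the class and method names are assumptions rather than the patented implementation.

```python
# Sketch of the unit composition of FIG. 5; class and method names are assumptions.

class InformationDisplayDevice:
    def __init__(self, acquisition_unit, processing_unit, display_unit):
        self.acquisition_unit = acquisition_unit  # unit 31: frames the environment
        self.processing_unit = processing_unit    # unit 32: identifies markers and related info
        self.display_unit = display_unit          # unit 33: displays related info in the image

    def refresh(self):
        image = self.acquisition_unit.capture_viewfinder()
        markers = self.processing_unit.identify(image)
        self.display_unit.render(image, markers)
```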
The image information in the view finding range may be the image displayed on the screen before the photographing key is pressed; in this case, the image displayed on the screen may be stored in the cache. Alternatively, it may be the image information stored in the memory after the photographing key is pressed. This embodiment is not limited in this respect.
The processing unit 32 of this embodiment is configured to obtain the spatial position coordinates of the environment. The spatial position coordinates may include the longitude and latitude of the position where the terminal equipment is located, or the longitude, latitude and altitude of that position. The spatial position coordinates may be acquired through a GPS module.
The processing unit 32 is configured to match the content contained in the image information against the images of at least one marker, and the related information of the at least one marker, contained in the local image library, and to determine the at least one marker contained in the image and the related information corresponding to the at least one marker based on the matching degree; specifically, the markers in the library images whose matching degree is greater than a preset threshold value are taken as the markers contained in the image information.
Or,
a processing unit 32 for determining at least one reference object based on the spatial position coordinates of the environment; and determining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker based on the content contained in the image information in the view finding range and the at least one reference object.
The manner of determining the at least one reference object may be: according to the spatial position coordinates of the current environment, the name of at least one reference object around the environment can be determined in combination with a map, and an image of the at least one reference object can then be acquired in combination with the image library. Alternatively, the image of the at least one reference object stored on the server side may be acquired by connecting to the server.
Alternatively, the processing unit 32 is configured to first determine a key marker of the image information in the view finding range based on the image library or on at least one reference object determined from the spatial position coordinates; then to select a first marker from the image information in the view finding range and detect the degree of association between the first marker and the key marker; when the degree of association is higher than a preset threshold value, the first marker is confirmed (identification succeeds), otherwise identification of the first marker fails, that is, the related information of the first marker is deleted; a second marker is then selected, and so on, until all candidate markers contained in the image information in the view finding range have been processed, and the markers thus obtained are output as the at least one marker.
The processing unit 32 is configured to update an environment confidence level of the current environment based on an identification accuracy probability of a key marker in the at least one marker;
determining an identification accuracy probability of an auxiliary marker in the at least one marker based on the updated environmental confidence;
performing iterative processing on the environment confidence based on the identification accuracy probability of the auxiliary marker; and obtaining the at least one marker and corresponding related information thereof.
Referring to FIG. 2, the image information in the view finding range is first obtained and the environment confidence is set to 0; the identification accuracy of the key marker among the at least one marker in the image information, for example marker 1 in the figure, is then determined, and a new environment confidence is obtained based on the identification accuracy of the key marker;
it is then judged whether the new environment confidence is larger than the original environment confidence;
if the new environment confidence is larger, the environment confidence is set equal to the new environment confidence, which is the updated environment confidence; otherwise, the original environment confidence is kept and is understood as the updated environment confidence;
iterative processing is then performed based on the updated environment confidence and the identification accuracy of the auxiliary markers, determining further markers in the image information until the markers in the image information are finally obtained; the markers in the image information can then be processed in other respects, for example by adding network link information, which is described in detail later.
The environment confidence level may be any value from 0 to 1, for example, the environment confidence level f=0 is set first, that is to say, the initial environment confidence level is set to 0; the environmental confidence is further adjusted according to the identification accuracy of the markers, for example, the environmental confidence can be adjusted to 0.5 according to the identification accuracy of the key markers and the identification accuracy of the auxiliary markers.
When the environment confidence reaches a certain preset value, the identification of the markers in the current image is determined to be completed, and finally a plurality of output markers and corresponding related information thereof are obtained.
The preset value may be set according to practical situations, for example, may reach 0.9, and may of course also be set to other values, which are not limited herein.
The information related to the marker may be at least one of the following: the name of the marker, the type of the marker, the function of the marker, the characteristics of the marker and consumption related information of the marker. Specifically, the information can be obtained from the local database, and/or the information can be obtained from the network side server.
When displaying the related information corresponding to the at least one marker in the image information in the view finding range, which related information of a marker is displayed can be determined according to the size of the display area, the priorities of the various kinds of related information, and the like. For example, the name of the marker has the highest priority and must be contained in the displayed related information; the type of the marker and the function of the marker have the next priority and can be selected for display after it; other related information may likewise have a corresponding priority. The size of the display area must also be taken into account: for example, if the current display area can only show two kinds of related information, only the two kinds with the highest priority are displayed. The size of the display area may be preset, or may be adjusted by the user as needed, which is not limited herein.
Based on the above scheme, further, the processing unit 32 is configured to obtain link information corresponding to at least a part of the markers in the at least one marker; and correlating the link information corresponding to the at least partial marker with the at least partial marker.
That is, link information corresponding to a part of the plurality of markers may be obtained, where the link information is used to connect to a network side server to invoke a network side resource corresponding to the link information. For example, the link information of the first marker may be specific content of the first marker on the network side server.
The processing unit 32 is configured to detect operation information for the first marker; based on the operation information, the display unit 33 is controlled to display the link information of the first marker.
That is, the link information of the markers may or may not be directly displayed.
When the link information of a marker is not displayed directly, its existence can be reflected on the marker itself; for example, the marker can be highlighted, or another type of mark can be applied to it, to indicate that link information corresponds to the marker.
Further, the user can determine whether the marker has link information according to the style of the marker.
When a marker has link information, for example when the first marker has link information, a user who wants to view the corresponding content can click on the first marker, and the corresponding link information is then displayed.
Clicking on the first marker may mean clicking on the first marker itself and/or clicking on the related information of the first marker.
By adopting the scheme, the image information in the view finding range is identified, at least one marker contained in the image information is identified, and finally the related information of the marker is displayed in the image information. Therefore, the user can acquire the related information of the marker contained in the image through image acquisition, the efficiency of the user for acquiring the information of the marker is improved, the efficiency of the user for acquiring the related information of the environment is improved, the user can be prevented from carrying out secondary query on the related information of the environment, and the use experience of the user is improved.
FIG. 6 shows a block diagram of an apparatus according to an embodiment of the invention. As shown in FIG. 6, the apparatus includes a memory 910 and a processor 920; the memory 910 stores a computer program executable on the processor 920, and the processor 920 implements the method in the above-described embodiments when executing the computer program. There may be one or more memories 910 and one or more processors 920.
The apparatus further comprises:
a communication interface 930, which is used for communicating with external equipment and performing interactive data transmission.
The memory 910 may include high-speed RAM memory or may further include non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 910, the processor 920, and the communication interface 930 are implemented independently, the memory 910, the processor 920, and the communication interface 930 may be connected to one another and communicate with one another through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 6, but this does not mean that there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 910, the processor 920, and the communication interface 930 are integrated on a chip, the memory 910, the processor 920, and the communication interface 930 may communicate with each other through internal interfaces.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements a method as in any of the above embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that various changes and substitutions are possible within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. An information display method, comprising:
framing the environment to obtain image information in a framing range;
identifying content contained in the image information in the view finding range to obtain at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker, wherein the related information comprises at least one of the name of the marker, the type of the marker, the function of the marker, the characteristics of the marker and consumption related information of the marker, and the related information corresponding to the marker is used for identifying the environment;
displaying the related information corresponding to the at least one marker in the image information in the view finding range;
the identifying the content contained in the image information in the view finding range, and obtaining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker include:
determining at least one reference based on the spatial location coordinates of the environment; determining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker based on the content contained in the image information in the view finding range and the at least one reference object;
wherein the manner of determining the at least one reference object is as follows: according to the spatial position coordinates of the current environment, determining the name of at least one reference object around the environment in combination with a map, and then acquiring an image of the at least one reference object in combination with an image library; or connecting to a server to acquire an image of the at least one reference object stored on the server side.
2. The method according to claim 1, wherein the method further comprises:
and acquiring the space position coordinates of the environment.
3. The method according to claim 1, wherein the method further comprises:
obtaining link information corresponding to at least part of the markers in the at least one marker;
and correlating the link information corresponding to the at least partial marker with the at least partial marker.
4. A method according to claim 3, characterized in that the method further comprises:
detecting operation information for the first marker;
and displaying the link information of the first marker based on the operation information.
5. The method according to claim 1, wherein the determining at least one marker included in the image information within the view-finding range and the related information corresponding to the at least one marker includes:
updating the environment confidence of the current environment based on the identification accuracy probability of the key marker in the at least one marker;
determining an identification accuracy probability of an auxiliary marker in the at least one marker based on the updated environmental confidence;
performing iterative processing on the environment confidence based on the identification accuracy probability of the auxiliary marker; and obtaining the at least one marker and corresponding related information thereof.
6. An information display device, comprising:
the acquisition unit is used for framing the environment to obtain image information in a framing range;
the processing unit is used for identifying the content contained in the image information in the view finding range to obtain at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker, wherein the related information comprises at least one of the name of the marker, the type of the marker, the function of the marker, the characteristics of the marker and consumption related information of the marker, and the related information corresponding to the marker is used for identifying the environment;
the display unit displays the related information corresponding to the at least one marker in the image information in the view finding range;
the processing unit is used for determining at least one reference object based on the space position coordinates of the environment; determining at least one marker contained in the image information in the view finding range and related information corresponding to the at least one marker based on the content contained in the image information in the view finding range and the at least one reference object;
wherein the manner of determining the at least one reference object is as follows: according to the spatial position coordinates of the current environment, determining the name of at least one reference object around the environment in combination with a map, and then acquiring an image of the at least one reference object in combination with an image library; or connecting to a server to acquire an image of the at least one reference object stored on the server side.
7. The apparatus of claim 6, wherein the processing unit is configured to obtain spatial location coordinates of the environment.
8. The apparatus of claim 6, wherein the processing unit is configured to obtain link information corresponding to at least some of the at least one marker; and correlating the link information corresponding to the at least partial marker with the at least partial marker.
9. The apparatus of claim 8, wherein the processing unit is configured to detect operational information for a first marker; and controlling the display of the link information of the first marker through the display unit based on the operation information.
10. The apparatus of claim 6, wherein the processing unit is configured to: update the environment confidence of the current environment based on an identification accuracy probability of a key marker in the at least one marker; determine an identification accuracy probability of an auxiliary marker in the at least one marker based on the updated environment confidence; perform iterative processing on the environment confidence based on the identification accuracy probability of the auxiliary marker; and obtain the at least one marker and the related information corresponding thereto.
11. An information display device, comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-5.
12. A computer readable storage medium storing a computer program, which when executed by a processor implements the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910584263.4A CN110275977B (en) | 2019-06-28 | 2019-06-28 | Information display method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910584263.4A CN110275977B (en) | 2019-06-28 | 2019-06-28 | Information display method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110275977A CN110275977A (en) | 2019-09-24 |
CN110275977B true CN110275977B (en) | 2023-04-21 |
Family
ID=67962724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910584263.4A Active CN110275977B (en) | 2019-06-28 | 2019-06-28 | Information display method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110275977B (en) |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5782035B2 (en) * | 2010-08-03 | 2015-09-24 | Panasonic Intellectual Property Corporation of America | Information processing apparatus, processing method, computer program, and integrated circuit |
CN102194007B (en) * | 2011-05-31 | 2014-12-10 | 中国电信股份有限公司 | System and method for acquiring mobile augmented reality information |
US9269243B2 (en) * | 2011-10-07 | 2016-02-23 | Siemens Aktiengesellschaft | Method and user interface for forensic video search |
CN103067856A (en) * | 2011-10-24 | 2013-04-24 | 康佳集团股份有限公司 | Geographic position locating method and system based on image recognition |
CN104038522B (en) * | 2013-03-06 | 2019-06-28 | 深圳先进技术研究院 | A kind of virtual-real blending space positioning system Internet-based |
WO2016017253A1 (en) * | 2014-08-01 | 2016-02-04 | ソニー株式会社 | Information processing device, information processing method, and program |
CN104965887A (en) * | 2015-06-16 | 2015-10-07 | 安一恒通(北京)科技有限公司 | Information acquiring method and apparatus |
CN105300397A (en) * | 2015-09-17 | 2016-02-03 | 成都千易信息技术有限公司 | Method for providing hiking time in scenic region via intelligent terminal |
CN106372702B (en) * | 2016-09-06 | 2020-05-01 | 深圳市欢创科技有限公司 | Positioning identifier and positioning method thereof |
CN107328479B (en) * | 2017-07-10 | 2019-10-08 | 国网黑龙江省电力有限公司电力科学研究院 | A kind of witness marker object for power equipment |
CN109510752B (en) * | 2017-09-15 | 2021-11-02 | 阿里巴巴集团控股有限公司 | Information display method and device |
CN108364209A (en) * | 2018-02-01 | 2018-08-03 | 北京京东金融科技控股有限公司 | Methods of exhibiting, device, medium and the electronic equipment of merchandise news |
CN108827307B (en) * | 2018-06-05 | 2021-01-12 | Oppo(重庆)智能科技有限公司 | Navigation method, navigation device, terminal and computer readable storage medium |
CN109522996A (en) * | 2018-12-30 | 2019-03-26 | 湖北知本信息科技有限公司 | Constant mark object automatic identification control system |
- 2019-06-28: CN application CN201910584263.4A, patent CN110275977B (en), status: Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090108A (en) * | 2017-06-29 | 2018-05-29 | 北京市商汤科技开发有限公司 | Information processing method, device, electronic equipment and storage medium |
CN108646280A (en) * | 2018-04-16 | 2018-10-12 | 宇龙计算机通信科技(深圳)有限公司 | A kind of localization method, device and user terminal |
CN109359596A (en) * | 2018-10-18 | 2019-02-19 | 上海电科市政工程有限公司 | A kind of highway vehicle localization method fast and accurately |
CN109308722A (en) * | 2018-11-26 | 2019-02-05 | 陕西远航光电有限责任公司 | A kind of spatial pose measuring system and method based on active vision |
CN109819169A (en) * | 2019-02-13 | 2019-05-28 | 上海闻泰信息技术有限公司 | Panorama shooting method, device, equipment and medium |
Non-Patent Citations (2)
Title |
---|
Wei-Hao Hwang et al., "Personalized Internet Advertisement Recommendation Service Based on Keyword Similarity," IEEE Transactions on Knowledge and Data Engineering, 2015, pp. 29-33. *
Li Xuelong et al., "A Survey of Big Data Systems" (大数据系统综述), 中国科学: 信息科学 (Science China: Information Sciences), 2015, Vol. 45, No. 1, pp. 1-44. *
Also Published As
Publication number | Publication date |
---|---|
CN110275977A (en) | 2019-09-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |