CN111710017A - Display method and device and electronic equipment - Google Patents

Display method and device and electronic equipment

Info

Publication number
CN111710017A
CN111710017A (application CN202010509733.3A)
Authority
CN
China
Prior art keywords
image
determining
cell
real
introduction information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010509733.3A
Other languages
Chinese (zh)
Inventor
邱瑞翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202010509733.3A priority Critical patent/CN111710017A/en
Publication of CN111710017A publication Critical patent/CN111710017A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a display method, a display device, and an electronic device. A real cell (residential community) image is collected and displayed in real time; at least one piece of introduction information for the target cell indicated by the real cell image is then determined; finally, a first enhanced image generated according to the at least one piece of introduction information is displayed on the displayed real cell image. A new display mode is thereby provided, through which information introducing the target cell (including objects within the target cell) can be displayed on the real cell image collected in real time, so that the user can quickly learn some basic conditions of the target cell, saving the user time. That is, the user can intuitively learn the layout, environmental conditions, supporting facilities, and the like of the target cell.

Description

Display method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a display method and apparatus, and an electronic device.
Background
With the development of the internet, users increasingly use terminal devices to realize various functions. For example, a user can browse and search house source information of cells with homes for sale through a terminal device, so that abundant house source information can be obtained without going out. Alternatively, the user can screen out, from the house source information of such cells available online, a cell or a house source that suits his or her preferences, and then view the house source on site with a broker.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides a display method, a display device and electronic equipment.
In a first aspect, an embodiment of the present disclosure provides a display method, where the method includes: real cell images are collected and displayed in real time; determining at least one introduction information aiming at a target cell, wherein the target cell is a cell indicated by the real cell image; and displaying a first enhanced image on the displayed real cell image, wherein the first enhanced image is generated according to the at least one introduction information.
In a second aspect, an embodiment of the present disclosure provides a display device, including: the first display unit is used for acquiring and displaying images of a real cell in real time; a first determining unit, configured to determine at least one piece of introduction information for a target cell, where the target cell is a cell indicated by the real cell image; and a second display unit, configured to display a first augmented image on the displayed real cell image, where the first augmented image is generated according to the at least one piece of introduction information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; the image acquisition device is used for acquiring images; a storage device, configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the presentation method as described above in the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the presentation method as described above in the first aspect.
According to the display method, the display device, and the electronic device, a real cell image is collected and displayed in real time; at least one piece of introduction information for the target cell indicated by the real cell image is then determined; finally, a first enhanced image generated according to the at least one piece of introduction information is displayed on the displayed real cell image. A new display mode is thereby provided, through which information introducing the target cell (including objects within the target cell) can be displayed on the real cell image collected in real time, so that the user can quickly learn some basic conditions of the target cell, saving the user time. That is, the user can intuitively learn the layout, environmental conditions, supporting facilities, and the like of the target cell.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow chart of one embodiment of a presentation method according to the present disclosure;
fig. 2A and 2B are schematic diagrams of an application scenario of a presentation method according to the present disclosure;
fig. 3A and 3B are schematic diagrams of an application scenario of a presentation method according to the present disclosure;
FIG. 4 is a schematic diagram of an application scenario of a presentation method according to the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of a display device according to the present disclosure;
FIG. 6 is an exemplary system architecture to which the presentation method of one embodiment of the present disclosure may be applied;
fig. 7 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow of one embodiment of a presentation method according to the present disclosure is shown. The presentation method shown in fig. 1 includes the following steps:
step 101, real cell images are collected and displayed in real time.
In this embodiment, an execution subject (for example, a terminal device) of the display method may acquire a real cell image in real time and display the real cell image.
In this embodiment, the execution subject may acquire a real-world image related to the target cell through a camera; a three-dimensional model of the target cell may then be constructed from that real-world image, either by the execution subject itself or by a server in communication connection with the execution subject.

In this embodiment, the execution subject may render the three-dimensional model of the target cell according to the execution subject's pose (position and orientation). As an example, a three-dimensional image rendering pipeline may be used to convert the three-dimensional model of the target cell into a two-dimensional image, and the resulting two-dimensional image is then displayed.

In this embodiment, the real cell image may include images of objects inside the real cell. Here, an object image may be understood as an image of the exterior of a building, an image of a lawn, an image of an entrance gate, an image of fitness equipment, or the like.
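As a non-limiting sketch of the rendering step described above, the following fragment projects one point of the three-dimensional model into two-dimensional image coordinates with a pinhole-camera model; the function name, the fixed focal length, and the omission of camera rotation are simplifying assumptions for illustration, not details from the disclosure.

```python
def project_point(point_3d, camera_position, focal_length=800.0):
    """Project a 3D model point into 2D image coordinates with a
    pinhole-camera model (a sketch; a full rendering pipeline would
    also apply camera rotation, clipping, and rasterization)."""
    # Translate the point into the camera's coordinate frame.
    px, py, pz = (p - c for p, c in zip(point_3d, camera_position))
    if pz <= 0:
        return None  # the point lies behind the camera
    # Perspective division: x' = f * x / z, y' = f * y / z.
    return (focal_length * px / pz, focal_length * py / pz)
```

A production renderer would instead push the whole model through a model-view-projection pipeline; this fragment only illustrates the pose-dependent projection at the heart of that conversion.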
Step 102, at least one piece of introduction information for the target cell is determined.
Here, the target cell may be understood as a cell indicated by the real cell image.
In this embodiment, the executing entity may determine the object in the target cell.
In some embodiments, the at least one piece of introduction information for the target cell may be understood to include introduction information for objects in the target cell and/or introduction information for the basic situation of the target cell.

For example, when the introduction information is directed to objects in the target cell, the at least one piece of introduction information may include information describing a basketball court in the target cell, information describing a badminton court in the target cell, information describing vegetation in the target cell, and the like.

When the introduction information is directed to the basic situation of the target cell, the at least one piece of introduction information may include information introducing the occupied area of the target cell, information introducing the surrounding facilities of the target cell, information introducing nearby bus stops or subway stations, and the like.

Of course, the at least one piece of introduction information may also include both introduction information for objects in the target cell and introduction information for the basic situation of the target cell.
Step 103, a first augmented image is displayed on the displayed real cell image.
In this embodiment, the executing subject may show the first augmented image on the displayed real cell image.
Here, the first augmented image may be generated based on the at least one piece of introduction information. That is, the first augmented image may be used to introduce the target cell (including objects within the target cell).

In some embodiments, the first augmented image may present the target cell in various ways, which are not limited herein. As an example, the first augmented image may introduce the target cell by at least: indicating the entrance gate of the target cell; indicating the fitness area, swimming pool, badminton court, and the like within the target cell; and showing the opening and closing times of the entrance gate, the fitness area, the swimming pool, the badminton court, and the like. It will be appreciated that the first augmented image may indicate different objects in different ways.

It should be noted that, in the display method provided in this embodiment, the real cell image is collected and displayed in real time; at least one piece of introduction information for the target cell indicated by the real cell image is then determined; finally, a first augmented image generated according to the at least one piece of introduction information is displayed on the displayed real cell image. A new display mode is thereby provided, through which information introducing the target cell (including objects within the target cell) can be displayed on the real cell image collected in real time, so that the user can quickly learn some basic conditions of the target cell, saving the user time. That is, the user can intuitively learn the layout, environmental conditions, supporting facilities, and the like of the target cell.
Please refer to fig. 2A and 2B, which illustrate an application scenario of an embodiment of the presentation method of the present disclosure. In this application scenario, a user (Xiao Li) may point the camera of a terminal device at a certain location of the target cell, so that images of that location are collected. Then, as shown in fig. 2A, a real cell image (which, as can be seen, includes trees, a lawn, and house sources) can be presented on the screen of the terminal device. The terminal device may then determine at least one piece of introduction information for the real cell image (the greening area of the target cell, the kinds of trees, the number of stories of a house source, the year of construction, and the total area of the cell). A first augmented image is then generated from the at least one piece of introduction information and displayed on the screen of the terminal device, as shown in fig. 2B.
It should be noted that the drawings in the present disclosure show, in a simplified schematic manner, scenes relevant to the technical solution; in a real application scenario, however, the image displayed by the terminal is an image of the real environment, so the degree of realism of the images in the drawings differs greatly from that of the images displayed in actual applications.
It should be noted that the first enhanced image may include introduction information of a basic situation of the target cell, introduction information of an object in the target cell, and the like, so that when the user browses the target cell, the user can obtain information of an object situation in the target cell, a peripheral situation of the target cell, an opening time of a supporting facility in the target cell, and the like, thereby providing a determination basis for the user.
In other words, by browsing only part of the scene of the target cell, the user can learn the approximate situation of the target cell and judge whether it meets the user's expectations, and can thus further decide whether to continue visiting the target cell; this improves the efficiency of viewing the target cell and saves the user's time.
In some application scenarios, the presentation form (the presentation manner or the presentation position of the first enhanced image) of the at least one introduction information may be set according to an actual situation, and is not limited herein.
In some embodiments, step 102 (determining at least one introduction information for the target cell) may specifically include:
determining, based on the real cell image, an object identifier corresponding to an object image in the real cell image, and then determining the at least one piece of introduction information according to a correspondence table.

Here, the correspondence table may include correspondences between object identifiers and introduction information.
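A minimal sketch of the correspondence-table lookup might look as follows; the table entries and identifier names are invented for illustration and do not come from the disclosure.

```python
# Hypothetical correspondence table mapping object identifiers to
# introduction information; all entries are illustrative examples.
CORRESPONDENCE_TABLE = {
    "basketball_court_1": "Basketball court: open 06:00-22:00, area 420 m^2.",
    "lawn_3": "Lawn: fescue and clover, area 800 m^2.",
}

def lookup_introduction(object_ids):
    """Return the introduction information for each recognized object
    identifier, skipping identifiers absent from the table."""
    return [CORRESPONDENCE_TABLE[oid] for oid in object_ids
            if oid in CORRESPONDENCE_TABLE]
```

In practice the table would be maintained server-side and keyed by the identifiers assigned to objects of the three-dimensional model.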
In some embodiments, the real cell image may include object images of a plurality of objects, and each three-dimensional object in the three-dimensional model of the target cell may have a corresponding object identifier. Accordingly, a unique three-dimensional object, with its unique object identifier, can be determined from each object image, so that the corresponding object identifier can be determined from the object image.
In some embodiments, since each object has certain characteristics, the introduction information is used to introduce those characteristics. For example, when the object is a basketball court, the introduction information corresponding to the basketball court identifier may be the opening and closing times of the basketball court, the area of the basketball court, and the like; when the object is a lawn, the introduction information corresponding to the lawn identifier may be the kinds of grass or flowers in the lawn, the area of the lawn, and the like.

Here, the execution subject may determine the object identifier according to at least one of: the characteristic information of the real cell image, and the acquisition position of the acquisition device.
In some embodiments, by performing feature extraction on the real cell image, the object images included in the real cell image can be identified, so that the corresponding object identifier can be determined from the object indicated by each object image. Since the characteristics of the object images of different objects differ to some extent, extracting features from the real cell image and analyzing the extracted characteristic information reveals which object images the real cell image includes, and the corresponding object identifiers can then be determined.
In some embodiments, some objects in the target cell may be repetitive; for example, the unit buildings in the target cell may have similar characteristic information. Therefore, in order to better determine which unit building the user is capturing, the acquisition position of the acquisition device may be obtained, and the object identifier may be determined using both the acquisition position and the characteristic information extracted from the real cell image; that is, the object identifier can be determined more accurately. For example, the extracted characteristic information may indicate that the real cell image includes a building image, but the target cell may contain many buildings. Therefore, in order to accurately determine which building is indicated by the building image in the real cell image, the acquisition position of the acquisition device may be obtained: for example, if the acquisition device is known to be located near Unit 2 of Building 3, the building image in the real cell image can be determined to be an image of Unit 2 of Building 3, the introduction information of Unit 2 of Building 3 can be obtained through its identifier, and that introduction information may be added to the at least one piece of introduction information.
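The position-based disambiguation described above can be sketched as follows; the building registry, the coordinates, and the planar-distance simplification are assumptions made for illustration only.

```python
# Hypothetical registry: building identifiers with known positions
# (planar coordinates for simplicity; values are illustrative).
BUILDINGS = {
    "building_3_unit_2": (120.0, 45.0),
    "building_5_unit_1": (310.0, 220.0),
}

def disambiguate_building(acquisition_position):
    """When image features only reveal 'a building', pick the building
    identifier whose registered position is closest to the camera's
    acquisition position."""
    ax, ay = acquisition_position
    def sq_dist(oid):
        bx, by = BUILDINGS[oid]
        return (bx - ax) ** 2 + (by - ay) ** 2
    return min(BUILDINGS, key=sq_dist)
```

Feature matching narrows the object to a category; the acquisition position then selects the specific instance, which is the combination the paragraph above describes.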
In some embodiments, step 102 (determining at least one introduction information for the target cell) may specifically include:

determining the acquisition position of the acquisition device; determining an object identifier set according to the acquisition position; determining a candidate introduction information set according to the object identifier set; and determining the at least one piece of introduction information from the candidate introduction information set.

Here, the distance between each object indicated by an object identifier in the object identifier set and the acquisition position is smaller than a preset distance threshold. The preset distance threshold may be set according to actual conditions, for example, 10 meters, and is not limited thereto.
In some embodiments, after the acquisition position of the acquisition device is determined, the position of the user can be known according to the acquisition position of the acquisition device, so that the objects existing around the user can be determined, and a plurality of objects existing around the user can be obtained, and accordingly, the object identification set can be determined; and each object identification has introduction information corresponding to the object identification, so that a candidate introduction information set can be obtained.
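The construction of the object identifier set from the acquisition position might be sketched as follows, assuming planar coordinates in metres and a hypothetical position table; none of the names come from the disclosure.

```python
def objects_within_range(acquisition_position, object_positions, threshold_m=10.0):
    """Build the object identifier set: keep identifiers whose object
    lies within `threshold_m` metres of the acquisition position.
    Positions are assumed to be planar (x, y) coordinates in metres."""
    ax, ay = acquisition_position
    result = set()
    for oid, (x, y) in object_positions.items():
        if ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 < threshold_m:
            result.add(oid)
    return result
```

Each identifier in the resulting set then contributes its introduction information to the candidate introduction information set.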
In some embodiments, certain objects are not easily captured by the user. For example, the user's collection route may follow the walking roads of the target cell, so some objects may never be captured, and introduction information about those objects would then not be displayed. For instance, if the swimming pool is not captured, the user may not learn, while browsing the target cell, that the target cell includes a swimming pool.

Therefore, in some embodiments, the acquisition position of the acquisition device may be determined; once it is determined, the set of object identifiers around the acquisition position may be determined, and the candidate introduction information set is then obtained. That is, when the user is capturing images near the swimming pool, introduction information for the swimming pool may be included in the candidate introduction information set, may be determined as part of the at least one piece of introduction information, and may thus be presented.
In some embodiments, there may be many objects around certain acquisition positions; that is, the number of object identifiers in the object identifier set may be large, and correspondingly the number of entries in the candidate introduction information set may be large, so the display interface of the execution subject cannot display all of the introduction information at once. Therefore, only part of the introduction information (the at least one piece of introduction information) may be selected from the candidate introduction information set for display. How the at least one piece of introduction information is determined from the candidate set is not limited here; it need only be set reasonably according to the actual situation. For example, when the identifier set includes a badminton court identifier, a basketball court identifier, an identifier of Unit 2 of Building 3, a lawn identifier, and the like, the candidate introduction information set may correspondingly include badminton court introduction information, basketball court introduction information, introduction information of Unit 2 of Building 3, lawn introduction information, and the like. At this time, the badminton court introduction information and the basketball court introduction information may be determined as the at least one piece of introduction information and displayed, as shown in fig. 3A.
In fig. 3A, the display interface of the execution subject may further include a conversion control (the circular control at the bottom right corner of fig. 3A). When the user performs a touch operation (a single-click, double-click, long-press, or similar operation) on the conversion control, the introduction information displayed on the display interface is changed; that is, other information in the candidate introduction information set (the introduction information of Unit 2 of Building 3, the lawn introduction information, and the like) is determined as the at least one piece of introduction information, as shown in fig. 3B.
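One possible model of the conversion control's paging behaviour is sketched below; the class name, the page size, and the wrap-around policy are editorial assumptions, not details from the disclosure.

```python
class IntroductionPager:
    """Cycle through a candidate introduction information list one page
    at a time: each tap on the conversion control advances to the next
    page and wraps around to the first page at the end."""

    def __init__(self, candidates, page_size=2):
        self.candidates = list(candidates)
        self.page_size = page_size
        self.page = 0

    def current(self):
        start = self.page * self.page_size
        return self.candidates[start:start + self.page_size]

    def on_tap(self):
        # Ceiling division gives the page count; modulo wraps around.
        pages = max(1, -(-len(self.candidates) // self.page_size))
        self.page = (self.page + 1) % pages
        return self.current()
```

For the fig. 3A/3B example, the first page would hold the badminton court and basketball court information, and a tap would switch to the remaining entries.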
In some embodiments, the determining of the at least one piece of introduction information may include: in response to receiving a first trigger operation on the real cell image, determining a first object image targeted by the first trigger operation; then determining a first object identifier corresponding to the first object image; and adding introduction information corresponding to the first object identifier to the at least one piece of introduction information.
That is, when the user wants to view the introduction information of an object in the real cell image, the first trigger operation may be performed on the object, so that the introduction information for the object may be displayed on the display interface.
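Resolving the first trigger operation to a first object image can be sketched as a simple bounding-box hit test; the region format and identifier names here are illustrative assumptions.

```python
def hit_test(tap_point, object_regions):
    """Map a trigger operation's screen coordinates to the object image
    it touched: return the identifier of the first bounding box that
    contains the tap. Regions are (x, y, width, height) in pixels."""
    tx, ty = tap_point
    for oid, (x, y, w, h) in object_regions.items():
        if x <= tx <= x + w and y <= ty <= y + h:
            return oid
    return None
```

The returned identifier is then looked up to fetch the introduction information to be added to the at least one piece of introduction information.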
In some embodiments, the target cell may be large, so, in order to shorten the time the user spends browsing the target cell, after the acquisition position of the acquisition device is obtained, a navigation path toward a target object may be generated according to the acquisition position of the acquisition device and the geographic position of the target object; a second augmented image may then be presented on the real cell image.
Here, the second enhanced image may be generated according to the navigation path.
Here, the target object may be an object within the target cell.
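Generating a navigation path toward the target object could, under the assumption of a walkway graph with named nodes (an editorial simplification; the disclosure does not specify a planning method), be sketched as a breadth-first search:

```python
from collections import deque

def navigation_path(graph, start, target):
    """Breadth-first search over a walkway graph from the node nearest
    the acquisition position to the target object's node; returns the
    shortest node sequence, or None if the target is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

The resulting node sequence is what the second augmented image would visualize as a route overlaid on the real cell image.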
In some embodiments, since the user may not know the layout of the target cell, the user may not know where in the target cell the target object he or she wants to view is located. For example, a user who is interested in the basketball court of the target cell may wish to continue browsing the target cell by heading toward the basketball court.
Therefore, it can be seen that, by determining the navigation path toward the target object, the user can quickly and accurately reach and browse the object he or she wants to see, and can thus quickly determine whether the target cell is a cell that suits the user's wishes.
In some embodiments, the object identifiers may be displayed on the display interface, as shown in fig. 4, the object identifiers of a plurality of objects are displayed on the display interface, and meanwhile, the user may perform a sliding operation in the display area to browse more object identifiers. When a user wants to browse a certain object, the user can select an object identifier corresponding to the object, and the object indicated by the selected object identifier is determined as a target object.
In some embodiments, a landmark object of the target cell may also be determined as the target object. For example, if the basketball court of the target cell is well-known, the basketball court may be determined as the target object; if the fitness equipment of the target cell is well-known, the fitness equipment may be determined as the target object. In practical applications, there are many specific ways to determine the target object, and for brevity they are not described in detail here.
In some embodiments, while browsing the real cell image, the user may want to know the distance between the objects corresponding to two object images in it (e.g., the distance between buildings). Therefore, the user may perform a ranging operation on the real cell image. When a ranging operation on the real cell image is detected, the second object image and the third object image indicated by the ranging operation may be determined; a second geographic position of the second object indicated by the second object image and a third geographic position of the third object indicated by the third object image are then determined; the geographic distance value between the second object and the third object may then be determined from the second geographic position and the third geographic position, and a third augmented image containing the geographic distance value may be presented on the displayed real cell image.
Here, the third enhanced image includes a geographic distance value.
Here, the ranging operation may be a double-click operation, a sliding operation, or the like. When the ranging operation is a double-click operation, the object image touched by the first click may be determined as the second object image, and the object image touched by the second click may be determined as the third object image. Correspondingly, when the ranging operation is a sliding operation, the object image touched at the start point of the slide may be determined as the second object image, and the object image touched at the end point may be determined as the third object image.
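One plausible way to compute the geographic distance value from the second and third geographic positions is the haversine great-circle formula; treating positions as latitude/longitude pairs is an assumption, since the disclosure does not specify a coordinate system.

```python
import math

def geographic_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in metres between two
    latitude/longitude positions; a candidate implementation of the
    geographic distance value shown in the third augmented image."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

Over the short distances inside one community, a planar approximation would also suffice; the haversine form is shown because it stays valid regardless of scale.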
In some embodiments, when browsing the target cell, the user can thus conveniently learn the distance between two objects, for example, the distance between two buildings or the distance between the swimming pool and the basketball court, which helps the user better understand the target cell.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a display apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which may be applied in various electronic devices.
As shown in fig. 5, the display device of the present embodiment includes: a first display unit 501 for acquiring and displaying real cell images in real time; a first determining unit 502 for determining at least one piece of introduction information for a target cell, where the target cell is the cell indicated by the real cell image; and a second display unit 503 for displaying a first augmented image on the displayed real cell image, where the first augmented image is generated according to the at least one piece of introduction information.
In some embodiments, the first determining unit 502 is specifically configured to determine, based on the real cell image, an object identifier corresponding to an object image in the real cell image, and to determine the at least one piece of introduction information according to a correspondence list, where the correspondence list includes correspondences between object identifiers and introduction information.
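The correspondence-list lookup can be sketched as a simple dictionary mapping object identifiers to introduction information. The identifiers and texts below are made-up placeholders, not data from the disclosure.

```python
# Illustrative sketch of the correspondence list: object identifier ->
# introduction information. Entries are hypothetical examples.
CORRESPONDENCE_LIST = {
    "building_3": "Building 3: 18 floors, completed 2016.",
    "pool_1": "Outdoor swimming pool, open May-October.",
}

def determine_introduction_info(object_ids):
    """Return the introduction information for each recognized object identifier,
    skipping identifiers that have no entry in the correspondence list."""
    return [CORRESPONDENCE_LIST[oid] for oid in object_ids
            if oid in CORRESPONDENCE_LIST]
```

A deployed system would likely fetch the list from a server keyed by the target cell rather than hard-coding it.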
In some embodiments, the first determining unit 502 is further specifically configured to determine the object identifier according to at least one of the following: feature information of the real cell image, and the acquisition position of the acquisition device.
In some embodiments, the first determining unit 502 is further specifically configured to determine the acquisition position of the acquisition device; determine an object identifier set according to the acquisition position, where the distance between each object indicated by an object identifier in the set and the acquisition position is smaller than a preset distance threshold; determine a candidate introduction information set according to the object identifier set; and determine the at least one piece of introduction information from the candidate introduction information set.
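The distance-threshold filtering step can be sketched as follows. The function name, the coordinate table, and the use of an equirectangular approximation (adequate over the few hundred metres of a residential community) are assumptions for illustration.

```python
import math

def nearby_object_ids(acquisition_pos, objects, threshold_m):
    """Return identifiers of objects whose distance from the acquisition
    position is smaller than threshold_m metres.

    Positions are (latitude, longitude) in degrees; distances use an
    equirectangular approximation on a spherical Earth (radius 6371 km).
    """
    lat0, lon0 = acquisition_pos
    result = []
    for oid, (lat, lon) in objects.items():
        dx = math.radians(lon - lon0) * math.cos(math.radians((lat + lat0) / 2))
        dy = math.radians(lat - lat0)
        if 6371000.0 * math.hypot(dx, dy) < threshold_m:
            result.append(oid)
    return result
```

The candidate introduction information set would then be assembled by looking up each returned identifier in the correspondence list.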
In some embodiments, the above display apparatus further comprises: a second determining unit 504, configured to determine, in response to receiving a first trigger operation for the real cell image, a first object image for which the first trigger operation is directed; a third determining unit 505, configured to determine a first object identifier corresponding to the first object image; an adding unit 506, configured to add introduction information corresponding to the first object identifier to the at least one introduction information.
In some embodiments, the above display apparatus further includes: an obtaining unit 507 for obtaining the acquisition position of the acquisition device; and a generating unit 508 for generating a navigation path for a target object according to the acquisition position of the acquisition device and the geographic position of the target object, where the target object is an object in the target cell. The second display unit 503 is further specifically configured to display a second augmented image on the displayed real cell image, where the second augmented image is generated according to the navigation path.
In some embodiments, the presentation interface includes an object identifier, and the generating unit 508 is further configured to determine, in response to receiving a selection operation for the object identifier, the object indicated by the selected object identifier as the target object.
In some embodiments, the above display apparatus further includes a ranging unit 509 configured to determine, in response to detecting a ranging operation for the real cell image, the second object image and the third object image indicated by the ranging operation; determine the second geographic location of the second object indicated by the second object image and the third geographic location of the third object indicated by the third object image; and determine the geographic distance value between the second object and the third object according to the second geographic location and the third geographic location. The second display unit 503 is further specifically configured to display a third augmented image on the displayed real cell image, where the third augmented image includes the geographic distance value.
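The geographic distance value can be computed from the two latitude/longitude positions with the haversine great-circle formula; the sketch below assumes a spherical Earth of radius 6371 km, and the function name is illustrative, not from the disclosure.

```python
import math

def geographic_distance_m(pos_a, pos_b):
    """Great-circle (haversine) distance in metres between two
    (latitude, longitude) positions given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*pos_a, *pos_b))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000.0 * math.asin(math.sqrt(a))
```

The returned value would be rendered into the third augmented image and overlaid on the real cell image.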
Referring to fig. 6, fig. 6 illustrates an exemplary system architecture to which the presentation method of one embodiment of the present disclosure may be applied.
As shown in fig. 6, the system architecture may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is the medium used to provide communication links between the terminal devices 601, 602, 603 and the server 605, and may include various types of connections, such as wired links, wireless communication links, or fiber optic cables.
The terminal devices 601, 602, 603 may interact with the server 605 via the network 604 to receive or send messages and the like. The terminal devices 601, 602, 603 may have various client applications installed thereon, such as web browser applications, search applications, and news and information applications. A client application in the terminal devices 601, 602, 603 may receive a user instruction and perform the corresponding function, for example adding corresponding information to displayed information in response to the instruction.
The terminal devices 601, 602, 603 may be hardware or software. When the terminal devices 601, 602, 603 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal device 601, 602, 603 is software, it can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 605 may be a server providing various services, for example receiving an information acquisition request sent by the terminal devices 601, 602, 603, obtaining the presentation information corresponding to the request in various ways, and sending the relevant data of the presentation information back to the terminal devices 601, 602, 603.
It should be noted that the display method provided by the embodiments of the present disclosure may be executed by a terminal device, and accordingly the display apparatus may be disposed in the terminal devices 601, 602, 603. Alternatively, the display method provided by the embodiments of the present disclosure may be executed by the server 605, and accordingly the display apparatus may be disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 7, shown is a schematic diagram of an electronic device (e.g., a terminal device or a server of fig. 6) suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage device 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire and display real cell images in real time; determine at least one piece of introduction information for a target cell, where the target cell is the cell indicated by the real cell image; and display a first augmented image on the displayed real cell image, where the first augmented image is generated according to the at least one piece of introduction information.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself; for example, the first display unit 501 may also be described as a "unit that acquires and displays real cell images in real time".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. A method of displaying, comprising:
real cell images are collected and displayed in real time;
determining at least one introduction information for a target cell, wherein the target cell is a cell indicated by the real cell image;
displaying a first augmented image on the displayed real cell image, wherein the first augmented image is generated according to the at least one introduction information.
2. The method of claim 1, wherein the determining at least one introduction information for the target cell comprises:
determining an object identifier corresponding to an object image in the real cell image based on the real cell image;
determining the at least one introduction information according to a corresponding relation list, wherein the corresponding relation list comprises: and the corresponding relation between the object identification and the introduction information.
3. The method according to claim 2, wherein the determining, based on the real cell image, an object identifier corresponding to an object image in the real cell image comprises:
determining the object identifier according to at least one of the following: feature information of the real cell image, and an acquisition position of an acquisition device.
4. The method of claim 1, wherein the determining at least one introduction information for the target cell comprises:
determining the acquisition position of acquisition equipment;
determining an object identification set according to the acquisition position, wherein the distance between an object indicated by the object identification in the object identification set and the acquisition position is smaller than a preset distance threshold value;
determining a candidate introduction information set according to the object identification set;
and determining the at least one introduction information from the candidate introduction information set.
5. The method of claim 1, further comprising:
in response to receiving a first trigger operation for the real cell image, determining a first object image for which the first trigger operation is directed;
determining a first object identification corresponding to the first object image;
adding introductory information corresponding to the first object identification to the at least one introductory information.
6. The method of claim 1, further comprising:
acquiring the acquisition position of acquisition equipment;
generating a navigation path for a target object according to the acquisition position of the acquisition equipment and the geographic position of the target object, wherein the target object is an object in the target cell;
displaying a second augmented image on the displayed real cell image, wherein the second augmented image is generated according to the navigation path.
7. The method of claim 6, wherein a presentation interface comprises an object identifier; and
determining the target object according to the following manner:
in response to receiving a selection operation for the object identifier, determining the object indicated by the selected object identifier as the target object.
8. The method of claim 1, further comprising:
in response to detecting a ranging operation for the real cell image, determining a second object image and a third object image indicated by the ranging operation;
determining a second geographic location of a second object indicated by the second object image and determining a third geographic location of a third object indicated by the third object image;
determining a geographic distance value of a second object and a third object according to the second geographic position and the third geographic position;
displaying a third augmented image on the displayed real cell image, wherein the third augmented image includes the geographic distance value.
9. A display device, comprising:
the first display unit is used for acquiring and displaying images of a real cell in real time;
a first determining unit, configured to determine at least one piece of introduction information for a target cell, where the target cell is a cell indicated by the real cell image;
a second presentation unit, configured to present a first augmented image on the presented real cell image, where the first augmented image is generated according to the at least one introduction information.
10. An electronic device, comprising:
one or more processors;
an image acquisition device for acquiring images;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202010509733.3A 2020-06-05 2020-06-05 Display method and device and electronic equipment Pending CN111710017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010509733.3A CN111710017A (en) 2020-06-05 2020-06-05 Display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010509733.3A CN111710017A (en) 2020-06-05 2020-06-05 Display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111710017A true CN111710017A (en) 2020-09-25

Family

ID=72539180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010509733.3A Pending CN111710017A (en) 2020-06-05 2020-06-05 Display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111710017A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308780A (en) * 2020-10-30 2021-02-02 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN114516870A (en) * 2022-02-10 2022-05-20 五邑大学 Triazolo hexa-nitrogen heterocyclic-3-amine compound and preparation method and application thereof
CN114527899A (en) * 2020-10-30 2022-05-24 北京中地泓科环境科技有限公司 Method for displaying environmental information based on picture

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140282220A1 (en) * 2013-03-14 2014-09-18 Tim Wantland Presenting object models in augmented reality images
CN106291519A (en) * 2015-06-05 2017-01-04 小米科技有限责任公司 Distance-finding method and device
CN108304067A (en) * 2018-01-26 2018-07-20 百度在线网络技术(北京)有限公司 System, method and apparatus for showing information
CN108572969A (en) * 2017-03-09 2018-09-25 阿里巴巴集团控股有限公司 The method and device of geography information point recommended information is provided
CN109685905A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 Cell planning method and system based on augmented reality
US20190318404A1 (en) * 2018-04-11 2019-10-17 Trivver, Inc. Systems and methods for presenting information related to products or services being shown on a second display device on a first display device using augmented reality technology
CN110487262A (en) * 2019-08-06 2019-11-22 Oppo广东移动通信有限公司 Indoor orientation method and system based on augmented reality equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308780A (en) * 2020-10-30 2021-02-02 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN114527899A (en) * 2020-10-30 2022-05-24 北京中地泓科环境科技有限公司 Method for displaying environmental information based on picture
CN114527899B (en) * 2020-10-30 2024-05-24 北京中地泓科环境科技有限公司 Method for displaying environment information based on drawing
CN114516870A (en) * 2022-02-10 2022-05-20 五邑大学 Triazolo hexa-nitrogen heterocyclic-3-amine compound and preparation method and application thereof

Similar Documents

Publication Publication Date Title
CN111710017A (en) Display method and device and electronic equipment
CN112488783B (en) Image acquisition method and device and electronic equipment
CN111309240B (en) Content display method and device and electronic equipment
CN111597466A (en) Display method and device and electronic equipment
CN111784712B (en) Image processing method, device, equipment and computer readable medium
CN110930220A (en) Display method, display device, terminal equipment and medium
US20240168605A1 (en) Text input method and apparatus, and electronic device and storage medium
CN111597465A (en) Display method and device and electronic equipment
CN108074009A (en) Motion route generation method and device, mobile terminal and server
CN112257582A (en) Foot posture determination method, device, equipment and computer readable medium
CN111652675A (en) Display method and device and electronic equipment
CN114417782A (en) Display method and device and electronic equipment
CN114416259A (en) Method, device, equipment and storage medium for acquiring virtual resources
CN111586295B (en) Image generation method and device and electronic equipment
CN103198750A (en) Method and apparatus for displaying digital map in client
CN112183388A (en) Image processing method, apparatus, device and medium
CN113628097A (en) Image special effect configuration method, image recognition method, image special effect configuration device and electronic equipment
CN111597414B (en) Display method and device and electronic equipment
CN114925680A (en) Logistics interest point information generation method, device, equipment and computer readable medium
CN114417214A (en) Information display method and device and electronic equipment
CN111835917A (en) Method, device and equipment for showing activity range and computer readable medium
CN110619089B (en) Information retrieval method and device
CN111563797A (en) House source information processing method and device, readable medium and electronic equipment
CN111931044A (en) Information display method and device and electronic equipment
CN111696214A (en) House display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination