CN111680191A - Information display method, device, equipment and storage device

Info

Publication number
CN111680191A
CN111680191A (application number CN202010441949.0A)
Authority
CN
China
Prior art keywords
target object
information
picture
stored information
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010441949.0A
Other languages
Chinese (zh)
Inventor
柯松佃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202010441949.0A
Publication of CN111680191A
Current legal status: Withdrawn

Classifications

    • G06F16/783 Retrieval of video data characterised by using metadata automatically derived from the content
    • G06F16/785 Retrieval using low-level visual features of the video content (colour or luminescence)
    • G06F16/7854 Retrieval using low-level visual features of the video content (shape)
    • G06F16/7867 Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an information display method, device, equipment and storage device. The method comprises: acquiring a first picture obtained by a camera scanning a target object, and displaying the first picture; searching for pre-stored information associated with the target object in the first picture; and displaying the pre-stored information at a preset position of the target object in the first picture. In this way, the information of the target object can be viewed quickly, saving the user the time otherwise spent searching for it.

Description

Information display method, device, equipment and storage device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an information display method, apparatus, device, and storage apparatus.
Background
In daily life, people need to remember a great deal of information: for a bank card, its password needs to be remembered; for books borrowed from a library, the return date needs to be remembered; and so on. Obviously, such a large amount of information cannot be memorized by the human brain alone.
At present, people usually write such information in a memo as a memory aid and look it up when needed. However, when there are many memo entries, the user has to pick out the desired information from a large amount of information, which takes a lot of time.
Disclosure of Invention
The application provides an information display method, device, equipment and storage device.
A first aspect of the present application provides an information display method. The method comprises: acquiring a first picture obtained by a camera scanning a target object, and displaying the first picture; searching for pre-stored information associated with the target object in the first picture; and displaying the pre-stored information at a preset position of the target object in the first picture.
Therefore, after the first picture obtained by scanning the target object with the camera is displayed, the pre-stored information associated with the target object in the first picture is searched for, and once found it can be displayed at the preset position of the target object in the first picture. That is, when the user needs to view information related to the target object, the user only has to scan the target object with the camera, so the information of the target object can be viewed quickly and the user's search time is saved.
The method further comprises: acquiring a second picture obtained by the camera scanning the target object; displaying an information input box and receiving information input by the user through the information input box; and saving the information as pre-stored information associated with the target object in the second picture.
Therefore, a second picture obtained by the camera scanning the target object is acquired, an information input box is displayed, the information input by the user through the information input box is received, and the information is saved as pre-stored information associated with the target object in the second picture. In this way, the information input by the user serves as information associated with the target object in the second picture, which the user can conveniently view later.
The acquiring a first picture obtained by the camera scanning the target object, or acquiring a second picture obtained by the camera scanning the target object, includes: acquiring a picture obtained by the camera scanning a preset part of the target object or a tag object on the target object. The saving the information as pre-stored information associated with the target object in the second picture includes: extracting feature data of the preset part or the tag object of the target object from the second picture, and saving the information as pre-stored information associated with the extracted feature data. The searching for pre-stored information associated with the target object in the first picture includes: extracting feature data of the preset part or the tag object of the target object from the first picture, and searching for pre-stored information associated with the extracted feature data.
Therefore, by acquiring a picture obtained by the camera scanning the preset part or the tag object on the target object, and then extracting the feature data of the preset part or the tag object from that picture, the associated pre-stored information can be looked up directly from the extracted feature data without scanning the whole target object. Likewise, the information received through the information input box can be saved as pre-stored information associated with the extracted feature data, that is, as information associated with the preset part or the tag object on the target object, so that in the future the pre-stored information can be found directly from the feature data of the preset part or the tag object.
The tag object is a preset pattern arranged on the target object.
Therefore, when the tag object is a preset pattern arranged on the target object, the user can search for the associated pre-stored information, or directly associate pre-stored information with the preset pattern, simply by scanning the preset pattern arranged on the target object.
After the pre-stored information is displayed at the preset position of the target object in the first picture, the method further includes: if it is detected that the position of the target object in the first picture has changed, adjusting the display position of the pre-stored information in the first picture so that the pre-stored information stays at the preset position of the target object in the first picture.
Therefore, by detecting whether the position of the target object in the first picture has changed and then adjusting the display position of the pre-stored information in the first picture, the pre-stored information can always be kept at the preset position of the target object in the first picture, and the user can clearly see that the pre-stored information is associated with that preset position.
Before the displaying the pre-stored information at the preset position of the target object in the first picture, the method further includes: acquiring information to be verified input by the user; and verifying the information to be verified, and displaying the pre-stored information at the preset position of the target object in the first picture when the verification passes. Alternatively, before acquiring the first picture obtained by the camera scanning the target object, the method further includes: acquiring information to be verified input by the user; and verifying the information to be verified, starting the camera when the verification passes, and then acquiring the first picture obtained by the camera scanning the target object.
Therefore, a verification process is added before the pre-stored information is displayed at the preset position of the target object in the first picture, so that even a user without authority who scans the target object cannot see the pre-stored information, which improves information security and protects user privacy. Alternatively, a verification step is added before the first picture is obtained by scanning the target object with the camera, so that the camera can only be started when verification passes; a user who fails verification cannot start the camera and cannot even learn whether the target object has pre-stored information, which further improves information security.
The information to be verified comprises at least one of a password and a user biological characteristic.
Therefore, the password or the biological characteristic input by the user can be directly received to verify whether the user has the right.
The target object includes a borrowed article, and the pre-stored information associated with the borrowed article includes at least one of the number of the borrowed article, the borrower, the borrowing period, the return time and the return place; and/or the target object includes a purchased commodity, and the pre-stored information associated with the purchased commodity includes at least one of the purchase time, warranty date and service telephone number of the purchased commodity; and/or the target object includes an electronic card or a passbook, and the pre-stored information associated with the electronic card/passbook includes at least one of a password, account holder information, remaining balance and validity period; and/or the target object includes a photograph, and the pre-stored information associated with the photograph includes at least one of a person in the photograph, the photographing time and the photographing place; and/or the target object includes a certificate, and the pre-stored information associated with the certificate includes at least one of the time of issue, the place of issue and the validity period.
A second aspect of the present application provides an information display apparatus, the apparatus comprising: the acquisition module is used for acquiring a first picture obtained by scanning a target object by a camera and displaying the first picture; the searching module is used for searching prestored information associated with the target object in the first picture; and the display module is used for displaying the pre-stored information on the preset position of the target object in the first picture.
Therefore, by the aid of the information display device, the target object can be scanned by the camera to check information, the information of the target object can be quickly checked, and information searching time of a user is saved.
A third aspect of the present application provides an information display device comprising a processor, and a memory, a camera and a display coupled to the processor, wherein the processor is configured to execute a computer program stored in the memory so as to perform, in conjunction with the camera and the display, the information display method described in the first aspect.
Therefore, the information display equipment can scan the target object by using the camera to check the information, can quickly check the information of the target object, and saves the information searching time of a user. Or a camera may be used to scan the target object to store pre-stored information associated with the target object.
A storage device according to a fourth aspect of the present application stores a computer program executable by a processor, the computer program being for executing the information display method described in the first aspect.
Therefore, the storage device can be used for storing the information display method, so that a user can quickly check the information of the target object by operating the method, and the information search time of the user is saved. Or a camera may be used to scan the target object to store pre-stored information associated with the target object.
Drawings
In order to more clearly illustrate the technical solutions in the present application, the drawings required in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor. Wherein:
FIG. 1 is a schematic flow chart of a first embodiment of an information display method according to the present application;
FIG. 2 is a schematic flow chart of a second embodiment of an information display method according to the present application;
FIG. 3 is a schematic flow chart of a third embodiment of an information display method according to the present application;
FIG. 4 is a schematic diagram of an embodiment of an information display apparatus according to the present application;
FIG. 5 is a schematic block diagram of the structure of an embodiment of an information display device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Augmented Reality (AR) technology superimposes computer-generated virtual information such as text, images, three-dimensional models, music and video onto the real world after simulation, so that the two kinds of information complement each other, thereby 'augmenting' the real world. Through long-term research, the applicant has found that there is currently no method that uses AR technology to view custom information associated with a specified real object. To improve on or solve this technical problem, the present application proposes at least the following embodiments:
referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the information display method of the present application.
The embodiment specifically comprises the following steps:
step S110: and acquiring a first picture obtained by scanning the target object by the camera, and displaying the first picture.
In the application, the electronic device with the camera can be used for scanning the first picture obtained by the target object. The camera may be integrated within the electronic device as part of the electronic device. Or as an external component, to be connected to the electronic device. The target object may be a real-world real object, i.e. a tangible object. The first screen may be displayed on the display, for example, the first screen may be displayed on a screen of the electronic device itself, or the first screen may be transmitted to another display by the electronic device and displayed on the other display.
When the target object is scanned by the camera, a part of the target object may be scanned, or the entire target object may be scanned. For example, when the target object is a refrigerator, one side surface of the refrigerator may be scanned, or the entire refrigerator may be scanned.
After the camera scans the target object, a corresponding picture is obtained, which is here defined as the first picture. When part of the target object is scanned, a first picture containing that part of the target object is obtained; when the whole target object is scanned, a first picture containing the whole target object is obtained. When the user moves the camera, the area scanned by the camera changes, and the first picture changes with it. For example, when the area scanned by the camera changes from one side of the refrigerator to the other, the first picture switches from one side to the other accordingly. That is, the first picture reflects what the camera is scanning in real time.
In some embodiments, the first picture obtained after scanning the target object does not change with the movement of the camera, i.e. the first picture is fixed. For example, a first picture is obtained after scanning one side of the refrigerator, and that first picture does not change as the camera moves.
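For illustration only, the following Python sketch shows one way such real-time acquisition and display of the first picture could be implemented; OpenCV, the camera index 0, the window name and the "press q to stop scanning" convention are assumptions, not requirements of the method.

```python
# Illustrative sketch only: acquire and display the "first picture" in real time.
# Assumes OpenCV and a camera at index 0.
import cv2

def scan_target_object(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)
    last_frame = None
    try:
        while True:
            ok, frame = cap.read()          # the latest frame is the current first picture
            if not ok:
                break
            last_frame = frame
            cv2.imshow("first picture", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):   # user ends the scan
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
    return last_frame

if __name__ == "__main__":
    first_picture = scan_target_object()
```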
In some embodiments, the electronic device may identify the target object during scanning and automatically stop scanning once the target object has been identified. Identification means, for example, establishing an identification mark for the target object, with the established mark corresponding to that target object, i.e. one target object corresponds to one identification mark. By establishing an identification mark for the target object, an association with the target object can be made through the mark. For example, when the target object is a refrigerator, after the refrigerator is identified a corresponding identification mark AAAA is established; the refrigerator's identification mark is then AAAA.
In some embodiments, the scanning may also run for a period of time. The user may end the scan by entering an instruction, or the scan may end automatically after a period of time. If the electronic device has not yet finished identifying the target object when scanning ends, the user can be reminded to continue scanning so that the identification can be completed.
In some embodiments, the first picture may also be a picture directly selected by the user, with an object in that picture taken as the target object, without scanning a physical object with the camera. For example, the user may directly select an existing picture as the first picture and take an object in it, such as a refrigerator in the picture, as the target object.
In some implementation scenarios, there may be one or several target objects in the first picture, that is, several target objects may be scanned.
Furthermore, the scanning may be performed on a preset part of the target object or on a tag object on the target object, so that the whole target object does not need to be scanned. This can be realized by the following step:
step S1101: and acquiring a picture obtained by scanning a preset part or a label object on the target object by the camera.
The predetermined location on the target object may be any location on the target object. For example, when the target object is a table lamp, the preset portion may be a lamp holder of the table lamp, an adjusting knob, a light emitting portion, or the like. The label on the target object can be a trademark, a two-dimensional code or other patterns, characters, symbols and the like attached to the target object. For example, when the target object is a table lamp, the tag may be a trademark of the table lamp, a two-dimensional code attached to the table lamp, or other patterns, characters, symbols, etc. attached to the table lamp by the user.
In some embodiments, the tag is a predetermined pattern disposed on the target object, such as a pattern that the user draws on or attaches to the target object.
When scanning a preset part or a label on a target object, the camera may be the whole of the scanning target object, and the whole may include the preset part or the label on the target object. It is also possible to scan only a predetermined portion or a label on the target object without scanning the entire target object. For example, the trademark on the table lamp may be directly scanned without scanning the whole table lamp to scan the trademark of the table lamp.
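As a concrete but non-limiting example, if the tag object happens to be a two-dimensional code, it could be read from the scanned picture as in the sketch below; the use of a QR code and of OpenCV's QRCodeDetector is an assumption for illustration only.

```python
# Illustrative sketch: read a two-dimensional-code tag object from the scanned picture.
import cv2

def read_tag_from_picture(picture):
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(picture)
    # 'data' is the decoded tag string; 'points' is None when no code was found
    return data if points is not None else None
```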
Step S111: search for pre-stored information associated with the target object in the first picture.
After the target object has been scanned, if there is pre-stored information associated with the target object, that pre-stored information can be viewed via the target object in the first picture. The pre-stored information may have been saved by the user through related operations, and may be stored on the electronic device or in a cloud server.
In this embodiment, the process of searching for the pre-stored information associated with the target object in the first picture may be to search for an association relationship existing in the target object according to the target object in the first picture, and then search for the pre-stored information according to the association relationship.
The association relationship may be a corresponding relationship between the target object and the pre-stored information, for example, the target object a corresponds to the pre-stored information a, and the target object B corresponds to the pre-stored information B. Of course, one target object may correspond to a plurality of pieces of pre-stored information, for example, the target object a corresponds to the pre-stored information a1, a2, a 3.
When the association relation is stored in the storage medium of the electronic device, the association relation existing on the target object can be directly searched in the electronic device, and the pre-stored information can be searched through the association relation. When the association relationship is stored in the cloud server, the association relationship existing on the target object can be searched in the cloud server to search the pre-stored information.
In some embodiments, the target object in the first picture may also be identified, and the pre-stored information then searched for. For example, the target object is identified to obtain its identification mark, and the identification mark is used to look up the pre-stored information associated with it. For example, when the target object is a refrigerator, identifying the refrigerator yields the identification mark AAAA, and the pre-stored information corresponding to this mark can then be found from the identification mark AAAA. In this implementation scenario, the pre-stored information is effectively associated with the identification mark of the target object.
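A minimal sketch of such an association relationship, using an in-memory mapping from identification mark to pre-stored information, is given below; the storage backend and the example entries are assumptions (the patent allows local storage or a cloud server).

```python
# Illustrative sketch: one identification mark may map to several pieces of
# pre-stored information.
from collections import defaultdict
from typing import Dict, List

associations: Dict[str, List[str]] = defaultdict(list)
associations["AAAA"] += [
    "Refrigerator door serviced last month",   # hypothetical pre-stored information
    "Warranty hotline noted in the manual",
]

def find_prestored_info(identification_mark: str) -> List[str]:
    # return every piece of pre-stored information associated with the mark
    return associations.get(identification_mark, [])

print(find_prestored_info("AAAA"))
```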
In some embodiments, the target object may be a borrowed article, and the pre-stored information associated with the borrowed article may include at least one of the number of the borrowed article, the borrower, the borrowing deadline, the return time and the return location.
In some embodiments, the target object may be a purchased article, and the pre-stored information associated with the purchased article includes at least one of a purchase time, a warranty date, and a service call for the purchased article.
In some embodiments, the target object may be an electronic card, a passbook, and the pre-stored information associated with the electronic card/passbook includes at least one of a password, account opening person information, remaining amount, and expiration date.
In some embodiments, the target object may be a photograph, and the pre-stored information associated with the photograph includes: at least one of a person in the photograph, a time of the photograph, and a place of the photograph.
In some embodiments, the target object may be a certificate, and the pre-stored information associated with the certificate includes: at least one of the time of issue, the place of issue, and the validity period.
Corresponding to step S1101, when the preset part or the tag object on the target object is scanned, the pre-stored information can be searched for based on the scanned picture, and step S111 can specifically be completed through the following step:
Step S1111: extract feature data of the preset part or the tag object of the target object from the picture, and search for pre-stored information associated with the extracted feature data.
After the preset part or the tag object on the target object has been scanned, in order to establish the association between the preset part or the tag object and the pre-stored information, feature data of the preset part or the tag object can be extracted from the picture obtained by scanning it. The feature data may be characteristics of the preset part or the tag object, such as its shape, size, colour or other physical characteristics, or a combination of these. For example, when the preset part is a refrigerator door, the feature data may be the size, shape, colour and other attributes of the refrigerator door or a combination thereof. When the preset part is a display screen, the feature data may be the pixel density, colour level, shape, size, etc. of the display screen. When the tag object is a trademark, the size, shape, colour or other physical characteristics of the trademark, and combinations thereof, may likewise serve as feature data.
After the feature data have been extracted, the pre-stored information associated with them can be looked up from the extracted feature data. For example, when the target object is a display screen, the extracted feature data are the pixel density, colour level, shape and size of the display screen, and the associated pre-stored information can then be looked up from these feature data. The lookup may be performed according to an association relationship, specifically an association relationship between the feature data and the pre-stored information. For example, feature data A corresponds to pre-stored information a, and feature data B corresponds to pre-stored information b. Of course, one piece of feature data may correspond to several pieces of pre-stored information; for example, feature data A may correspond to pre-stored information a1, a2 and a3.
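The sketch below illustrates this feature-data lookup using ORB descriptors as one possible form of feature data; the choice of descriptor, the brute-force matcher and the match threshold are assumptions, since the method only requires that some characteristic of the preset part or tag object be extracted and matched.

```python
# Illustrative sketch: extract feature data from the scanned picture and look up
# the pre-stored information associated with the best-matching stored features.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def extract_feature_data(picture_bgr):
    gray = cv2.cvtColor(picture_bgr, cv2.COLOR_BGR2GRAY)
    _, descriptors = orb.detectAndCompute(gray, None)
    return descriptors              # may be None if nothing distinctive is visible

def lookup_prestored_info(query_descriptors, feature_store, min_matches=20):
    # feature_store: list of (stored_descriptors, list_of_prestored_info)
    if query_descriptors is None:
        return []
    for stored_descriptors, info in feature_store:
        matches = matcher.match(query_descriptors, stored_descriptors)
        if len(matches) >= min_matches:     # threshold is an assumption
            return info
    return []
```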
Step S112: display the pre-stored information at a preset position of the target object in the first picture.
After the pre-stored information has been found, it can be displayed, for example at a preset position of the target object in the first picture. The preset position may be any part of the target object; for example, if the target object is a refrigerator, the preset position may be the refrigerator door, the sides of the refrigerator, and so on. The pre-stored information is shown at the same place in the first picture as the preset position of the target object; for example, from the user's point of view the pre-stored information appears overlaid on the preset position. The choice of preset position is not limited, as long as the information can be seen by the user.
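A minimal sketch of this display step is shown below; drawing the text with OpenCV, and the particular font, colour and line spacing, are illustrative assumptions. The method only requires that the pre-stored information appear at the preset position where the user can see it.

```python
# Illustrative sketch: draw the pre-stored information at the preset position of
# the target object in the first picture.
import cv2

def overlay_prestored_info(picture, preset_position, info_lines):
    x, y = preset_position                  # pixel coordinates of the preset position
    for i, line in enumerate(info_lines):
        cv2.putText(picture, line, (x, y + 25 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return picture
```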
By the above method, the target object can be scanned and the pre-stored information associated with it looked up via the target object in the first picture obtained by scanning. By scanning the target object, the user can view the pre-stored information on the target object, which helps the user record information against a real object, lets the information of the target object be viewed quickly, and saves the user's search time. In addition, the preset part or the tag object on the target object can be scanned directly to extract its feature data, and the associated pre-stored information can be looked up from that feature data; the preset part or the tag object can thus be scanned directly to find the pre-stored information, without having to scan the whole target object.
After the pre-stored information is displayed at the preset position of the target object in the first picture, if the user moves the electronic device, the area scanned by the camera changes and the position of the target object in the first picture changes accordingly. In that case, the following step may further be performed: if it is detected that the position of the target object in the first picture has changed, adjusting the display position of the pre-stored information in the first picture so that the pre-stored information stays at the preset position of the target object in the first picture.
The position of the target object in the first picture changes when the user moves the electronic device or when the target object itself moves. For example, when the user moves the electronic device to the left, the target object moves to the right in the first picture and may finally end up at the right edge of the first picture. Since the pre-stored information is displayed at the preset position of the target object, in order to keep it displayed there while the target object's position in the first picture changes, the display position of the pre-stored information in the first picture can be adjusted so that it stays at the preset position of the target object. For example, when the preset position moves from the middle of the first picture to the right, the pre-stored information also moves from the middle to the right. From the user's point of view, the pre-stored information is fixed to the preset position of the target object, and the place where it is displayed changes as the preset position changes.
It can be understood that when the target object disappears from the first picture because of the position change, the pre-stored information disappears from the first picture at the same time. In this case, an indication mark may be displayed in the first picture to indicate to the user that there is pre-stored information in a certain direction outside the first picture, making it convenient for the user to find it. For example, when the target object moves to the left in the first picture and finally disappears from it, an arrow pointing to the left may be displayed in the first picture to prompt the user that there is pre-stored information to the left of the first picture at that moment.
Having the display of the pre-stored information follow the preset position of the target object in the first picture makes the correspondence between the pre-stored information and the preset position more intuitive. For example, if the target object is a refrigerator and the preset position is the refrigerator door, and the pre-stored information concerns the maintenance status of the refrigerator door, then by displaying the pre-stored information on the refrigerator door the user directly sees the correspondence between the refrigerator door and the pre-stored information, and is not misled by the information being shown somewhere else.
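As an illustration of how the display position could follow the target object, the sketch below re-locates the preset part in each new frame by template matching; any object-tracking technique could be used instead, and the similarity threshold is an assumption. When the target leaves the picture (the function returns None), the indication mark described above can be drawn instead.

```python
# Illustrative sketch: find where the preset part has moved to in a new frame so
# the pre-stored information can be redrawn there.
import cv2

def relocate_preset_position(new_frame, template_bgr):
    # template_bgr: a crop of the preset part captured when the info was first shown
    result = cv2.matchTemplate(new_frame, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < 0.5:                       # similarity threshold is an assumption
        return None                         # target has left the picture
    return max_loc                          # new top-left corner for the overlay
```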
Referring to fig. 2, fig. 2 is a schematic flow chart of a second embodiment of the information display method of the present application. In order to improve the security of the information and reduce the possibility of information leakage, before step S110, that is, before scanning the target object with the camera to obtain the first picture, an authentication step may be performed first, so that a person who fails to pass the authentication cannot scan the target object with the camera, and thus cannot know whether the target object has the associated pre-stored information, thereby ensuring the privacy of the user. This can be achieved, for example, by the following steps:
step S108: and acquiring information to be verified input by a user.
Before the information to be verified input by the user is obtained, the electronic device may prompt the user to input the information to be verified, for example, an input box and a corresponding text prompt are displayed on a screen of the electronic device, or a pattern is displayed, so that the user can input the information to be verified conveniently. The information to be authenticated may include at least one of a password and a user biometric. The biometric features include, for example, fingerprints, irises, faces or voices, etc.
The method for acquiring the information to be verified may be to detect the content input by the user, or acquire fingerprint data through a fingerprint module, or acquire face data through a camera, or collect voice data through a microphone, and the like.
In a specific implementation scenario, when the information to be verified input by the user needs to be verified, the corresponding verification information has already been stored beforehand, so that after the information to be verified is obtained it can be compared with the verification information. Accordingly, when the user inputs information to be verified for the first time, the information obtained in this step is stored as the verification information, so that a comparison can conveniently be made the next time information to be verified is obtained from the user. The verification information may be stored in a storage medium of the electronic device or in the cloud.
Step S109: verify the information to be verified, and start the camera when the verification passes.
After the information to be verified is obtained, the information to be verified can be used for verification, and whether a user has the authority to use the camera to scan a target object or not is judged. The process of verifying the information to be verified can be completed on the electronic device, or the electronic device sends the information to be verified to the cloud server, and after verification by the cloud server, the verification result is sent to the electronic device. For example, the verification is performed by comparing the information to be verified input by the user with the verification information stored in the local or cloud of the electronic device, and if the comparison is consistent, the verification is passed. For example, when the acquired information to be verified is a fingerprint, the corresponding verification information is the fingerprint data of the user that is stored before. When the comparison result between the acquired fingerprint and the previously stored fingerprint is consistent, the verification is considered to be passed.
When the verification passes, meaning the user has the right to use the camera, step S110 may be executed to scan the target object with the camera and obtain the first picture.
In another specific implementation scenario, in order to prevent the pre-stored information from being viewed by someone without authority, a verification step may be added before the pre-stored information is displayed at the preset position of the target object in the first picture in step S112; if the verification fails, the pre-stored information is not displayed even if it has been found, so the information is not leaked. For example, this can be achieved as follows: acquiring information to be verified input by the user; verifying the information to be verified, and displaying the pre-stored information at the preset position of the target object in the first picture when the verification passes. For obtaining the information to be verified, refer to step S108, and for the verification process, refer to step S109, which are not repeated here. In this scenario, it is only when the verification passes that step S112 is executed to display the pre-stored information at the preset position of the target object in the first picture.
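For illustration, the sketch below verifies a password-type piece of information to be verified against previously stored verification information; salted hashing is only one possible way to store it, and biometric information would instead be compared by the corresponding fingerprint, face or voice module.

```python
# Illustrative sketch: store verification information once, then compare the
# information to be verified against it before starting the camera or showing
# the pre-stored information.
import hashlib
import hmac
import os

def store_verification_info(password: str):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest                     # kept locally or in the cloud

def verify(candidate: str, salt: bytes, stored_digest: str) -> bool:
    digest = hashlib.sha256(salt + candidate.encode("utf-8")).hexdigest()
    return hmac.compare_digest(digest, stored_digest)

salt, stored = store_verification_info("123456")     # hypothetical password
assert verify("123456", salt, stored)                 # passes: camera may be started
assert not verify("000000", salt, stored)             # fails: camera stays off
```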
For the pre-stored information to be viewable, it must first have been saved as information associated with the target object. On this basis, the present application provides a method for associating virtual information with a physical object, which may specifically include the following steps:
referring to fig. 3, fig. 3 is a schematic flow chart of a third embodiment of the information display method of the present application.
Step S210: acquire a second picture obtained by the camera scanning the target object.
For the scanning process, refer to step S110; the difference is that a second picture is obtained in this step.
Further, the scanning may target a preset part of the target object or a tag object on the target object. In that case, step S210 may include the following step:
Step S2101: acquire a picture obtained by the camera scanning a preset part of the target object or a tag object on the target object.
For details, refer to step S1101.
Step S211: display an information input box and receive information input by the user through the information input box.
After the second picture is obtained, custom virtual information can be added for the target object in the second picture. To this end, the electronic device may display an information input box to receive information input by the user through it. The information input box may be displayed in the second picture or on another interface. Through the displayed information input box, the user can input the information that he or she wants to save. The information may be text, patterns, pictures, sound, video, and the like.
When the information input box is displayed in the second picture, it may be displayed on the target object. For example, when the target object is a refrigerator, the information input box is displayed on the refrigerator in the second picture. Of course, the information input box may also be displayed on another interface, in which case the target object may be displayed on that interface at the same time so that the user clearly knows for which object the information is being input.
Corresponding to step S2101, when the camera scans the preset part of the target object or the tag object, step S211 may specifically include the following step:
Step S2111: display an information input box and receive information input by the user through the information input box.
The information input box may be displayed in the picture obtained by scanning the preset part or the tag object on the target object, or on another interface.
When the information input box is displayed in the obtained picture, it may be displayed on the preset part of the target object or on the tag object. For example, when the target object is a refrigerator and the preset part is the refrigerator door, the information input box may be displayed on the refrigerator door; when the tag object is a trademark on the refrigerator, the information input box may be displayed on the trademark.
When the information input box is displayed on another interface, that interface may also display the preset part or the tag object at the same time, so that the user clearly knows for which object the information is being input.
Step S212: save the information as pre-stored information associated with the target object in the second picture.
After the information input by the user has been received, it can be saved as pre-stored information associated with the target object in the second picture, so that the pre-stored information can be viewed the next time the target object is scanned with the camera. The pre-stored information is, for example, "XXX was purchased at XXX on XX/XX/XXXX, at a price of XXX."
The information input by the user may be saved in a storage medium of the electronic device or in a cloud server.
Since the information input by the user is input for the target object, it can be taken as pre-stored information associated with the target object in the second picture. Associating the pre-stored information with the target object in the second picture can be understood as establishing an association relationship between the pre-stored information and the target object, where the association relationship may be a correspondence between the target object and the pre-stored information; for example, target object A corresponds to pre-stored information a, and target object B corresponds to pre-stored information b. Of course, one target object may correspond to several pieces of pre-stored information; for example, target object A may correspond to pre-stored information a1, a2 and a3.
In some implementations, several pieces of associated pre-stored information may be stored for the same target object. For example, when the target object is a refrigerator, there may be several pieces of pre-stored information associated with the refrigerator.
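A minimal sketch of this saving step is given below; the local JSON file stands in for the storage medium or cloud server mentioned above, and the file name and the use of the identification mark as the key are assumptions.

```python
# Illustrative sketch: save the text entered in the information input box as
# pre-stored information associated with the target object.
import json
from pathlib import Path

STORE = Path("prestored_info.json")         # hypothetical local storage medium

def save_prestored_info(identification_mark: str, info: str) -> None:
    data = json.loads(STORE.read_text("utf-8")) if STORE.exists() else {}
    data.setdefault(identification_mark, []).append(info)   # several entries per object
    STORE.write_text(json.dumps(data, ensure_ascii=False, indent=2), "utf-8")

save_prestored_info("AAAA", "Purchased at XXX on XX/XX/XXXX, price XXX")
```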
Corresponding to step S2111, if the preset part or the tag object on the target object is scanned, step S212 may further include the following step:
Step S2121: extract feature data of the preset part or the tag object of the target object from the picture obtained by scanning the preset part or the tag object, and save the information as pre-stored information associated with the extracted feature data.
The process of extracting the feature data of the preset portion of the target object or the tag object from the frame obtained by scanning the preset portion of the target object or the tag object may refer to step S1111.
After the feature data of the preset part or the tag object of the target object is extracted, the information input by the user can be stored and used as the pre-stored information to be associated with the feature data. Namely, the pre-stored information can be found through the characteristic data.
In some implementations, there may be a plurality of associated pre-stored information based on the feature data of the predetermined portion or tag of the same target object. For example, when the preset portion of the target object is a refrigerator door of a refrigerator, the number of pieces of pre-stored information associated with the characteristic data of the refrigerator door may be several.
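The sketch below illustrates keeping several pieces of pre-stored information under one piece of feature data; hashing the descriptor bytes into a dictionary key is purely an illustrative shortcut, and a practical system would more likely match descriptors as in the earlier lookup sketch.

```python
# Illustrative sketch: associate user-entered information with extracted feature
# data, allowing several entries per preset part or tag object.
import hashlib
from collections import defaultdict

feature_associations = defaultdict(list)

def feature_key(descriptors) -> str:
    # descriptors: e.g. the ORB descriptor array from the earlier sketch
    return hashlib.sha256(descriptors.tobytes()).hexdigest()

def save_info_for_feature(descriptors, info: str) -> None:
    feature_associations[feature_key(descriptors)].append(info)
```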
In some embodiments, the target object may be a borrowed article, and the pre-stored information associated with the borrowed article may include at least one of the number of the borrowed article, the borrower, the borrowing deadline, the return time and the return location.
In some embodiments, the target object may be a purchased article, and the pre-stored information associated with the purchased article includes at least one of a purchase time, a warranty date, and a service call for the purchased article.
In some embodiments, the target object may be an electronic card, a passbook, and the pre-stored information associated with the electronic card/passbook includes at least one of a password, account opening person information, remaining amount, and expiration date.
In some embodiments, the target object may be a photograph, and the pre-stored information associated with the photograph includes: at least one of a person in the photograph, a time of the photograph, and a place of the photograph.
In some embodiments, the target object may be a certificate, and the pre-stored information associated with the certificate includes: at least one of the time of issue, the place of issue, and the validity period.
By the above method, the second picture obtained by scanning the target object with the camera is used to associate the target object in the second picture with the information input by the user, so that the user can add and save custom information about the target object and view it later. In addition, a picture obtained by the camera scanning the preset part or the tag object on the target object can be used: the corresponding feature data are extracted and the information input by the user is associated with the feature data, so that in the future the user can view the pre-stored information simply by scanning the preset part or the tag object on the target object.
Referring to fig. 4, fig. 4 shows a first embodiment of an information display apparatus according to the present application. The information display apparatus includes an acquisition module 410, a search module 411 and a display module 412. The acquisition module 410 is configured to acquire a first picture obtained by the camera scanning the target object and display the first picture. The search module 411 is configured to search for pre-stored information associated with the target object in the first picture. The display module 412 is configured to display the pre-stored information at a preset position of the target object in the first picture.
Optionally, the information display apparatus further includes a saving module, where the saving module is configured to acquire a second picture obtained by the camera scanning the target object; display an information input box and receive information input by the user through the information input box; and save the information as pre-stored information associated with the target object in the second picture.
Optionally, the acquisition module 410 acquires the first picture obtained by the camera scanning the target object, or the saving module acquires the second picture obtained by the camera scanning the target object, specifically by acquiring a picture obtained by the camera scanning a preset part of the target object or a tag object on the target object. The saving module saves the information as pre-stored information associated with the target object in the second picture specifically by extracting feature data of the preset part or the tag object of the target object from the picture and saving the information as pre-stored information associated with the extracted feature data. The search module 411 may further be configured to extract the feature data of the preset part or the tag object of the target object from the picture and search for pre-stored information associated with the extracted feature data.
Optionally, the tag object is a preset pattern arranged on the target object.
Optionally, the information display apparatus further includes an adjusting module, where the adjusting module is configured to adjust the display position of the pre-stored information in the first picture if it is detected that the position of the target object in the first picture has changed, so that the pre-stored information stays at the preset position of the target object in the first picture.
Optionally, the information display apparatus further includes a verification module, where the verification module is configured to acquire information to be verified input by the user, verify the information to be verified, and notify the display module 412 to display the pre-stored information at the preset position of the target object in the first picture when the verification passes; or the verification module is configured to verify the information to be verified and notify the acquisition module 410 to start the camera when the verification passes, so as to acquire the first picture obtained by the camera scanning the target object.
Optionally, the target object includes a borrowed article, and the pre-stored information associated with the borrowed article includes: at least one of the number of the borrowed article, the borrower, the borrowing deadline, the return time, and the return location.
Optionally, the target object includes a purchased commodity, and the pre-stored information associated with the purchased commodity includes at least one of a purchase time, a warranty date, and a service phone of the purchased commodity.
Optionally, the target object includes an electronic card and a passbook, and the pre-stored information associated with the electronic card/passbook includes at least one of a password, account opening person information, remaining amount, and validity period.
Optionally, the target object includes a photograph, and the pre-stored information associated with the photograph includes: at least one of a person in the photograph, a time of the photograph, and a place of the photograph.
Optionally, the target object includes a certificate, and the pre-stored information associated with the certificate includes: at least one of the time of issue, the place of issue, and the validity period.
For a description of each module of the information display apparatus, reference may be made to the description of the corresponding steps in the above method embodiments, which is not repeated here.
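Purely as an illustration of the module split, the sketch below mirrors the acquisition, search and display modules in a small class; all names are assumptions, and the unimplemented methods would wrap the camera-capture and overlay sketches from the method embodiments above.

```python
# Illustrative sketch: the information display apparatus as three cooperating modules.
from typing import Dict, List

class InformationDisplayApparatus:
    def __init__(self, associations: Dict[str, List[str]]):
        self.associations = associations     # target identifier -> pre-stored info

    def acquire_first_picture(self, camera_index: int = 0):
        """Acquisition module: obtain and display the first picture."""
        raise NotImplementedError("would wrap the camera-capture sketch above")

    def search_prestored_info(self, identifier: str) -> List[str]:
        """Search module: look up pre-stored information for the target object."""
        return self.associations.get(identifier, [])

    def display_prestored_info(self, picture, preset_position, info: List[str]):
        """Display module: show the info at the preset position of the target object."""
        raise NotImplementedError("would wrap the overlay sketch above")

apparatus = InformationDisplayApparatus({"AAAA": ["Serviced last month"]})
print(apparatus.search_prestored_info("AAAA"))
```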
Referring to fig. 5, fig. 5 is a schematic block diagram of a structure of an embodiment of an information display device according to the present application. The information display device comprises a processor 510 and a memory 511, a camera 512 and a display 513 coupled to the processor.
The processor 510 is configured to execute the computer program stored in the memory 511 to execute the above-mentioned information display method in conjunction with the camera 512 and the display 513.
The application also provides an embodiment of a storage device. The storage device of this embodiment stores a computer program, and the computer program can implement the steps of the information display method in any of the above embodiments when executed by the processor.
The storage device of this embodiment may be a medium that can store a computer program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or it may be a server that stores the computer program; the server may send the stored computer program to another device for execution, or run the stored computer program itself.
It is to be understood that the embodiments provided in this application do not conflict with one another and can be combined with each other. In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is merely a logical division, and an actual implementation may use another division; for example, several units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical or in another form.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. An information display method, comprising:
acquiring a first picture obtained by scanning a target object by a camera, and displaying the first picture;
searching for pre-stored information associated with the target object in the first picture; and
displaying the pre-stored information on a preset position of the target object in the first picture.
2. The method of claim 1, further comprising:
acquiring a second picture obtained by scanning the target object by the camera;
displaying an information input box and receiving information input by a user through the information input box;
saving the information as pre-stored information associated with the target object in the second picture.
3. The method of claim 2, wherein the acquiring a first picture obtained by scanning the target object by the camera or the acquiring a second picture obtained by scanning the target object by the camera comprises:
acquiring a picture obtained by scanning a preset part or a tag object on the target object by the camera;
the saving the information as pre-stored information associated with the target object in the second picture comprises:
extracting feature data of a preset part or a tag object of the target object from the picture, and storing the information as pre-stored information associated with the extracted feature data;
the searching for the pre-stored information associated with the target object in the first picture comprises:
extracting the feature data of the preset part or the tag object of the target object from the picture, and searching for pre-stored information associated with the extracted feature data.
4. The method of claim 3, wherein the tag object is a predetermined pattern disposed on the target object.
5. The method according to claim 1, wherein after the displaying the pre-stored information on the preset position of the target object in the first picture, the method further comprises:
if it is detected that the position of the target object in the first picture has changed, adjusting the display position of the pre-stored information in the first picture so as to keep the pre-stored information at the preset position of the target object in the first picture.
6. The method according to claim 1, wherein before the displaying the pre-stored information on the preset position of the target object in the first picture, the method further comprises:
acquiring information to be verified input by a user;
verifying the information to be verified, and displaying the pre-stored information on a preset position of the target object in the first picture when the information to be verified passes the verification;
or, before the acquiring a first picture obtained by scanning the target object by the camera, the method further comprises:
acquiring information to be verified input by a user;
verifying the information to be verified, starting the camera when the information to be verified passes the verification, and performing the step of acquiring a first picture obtained by scanning the target object by the camera;
wherein the information to be verified comprises at least one of a password and a biometric feature of the user.
7. The method of claim 1,
the target object comprises a borrowed article, and the pre-stored information associated with the borrowed article comprises: at least one of the number of the borrowed article, the borrower, the borrowing deadline, the return time and the return place; and/or
the target object comprises a purchased commodity, and the pre-stored information associated with the purchased commodity comprises at least one of a purchase time, a warranty date and a service telephone of the purchased commodity; and/or
the target object comprises an electronic card or a passbook, and the pre-stored information associated with the electronic card or the passbook comprises at least one of a password, account opener information, a remaining balance and a validity period; and/or
the target object comprises a photograph, and the pre-stored information associated with the photograph comprises: at least one of a person in the photograph, a time of the photograph and a place of the photograph; and/or
the target object comprises a certificate, and the pre-stored information associated with the certificate comprises: at least one of the time of issue, the place of issue, and the validity period.
8. An information display device characterized by comprising:
the acquisition module is used for acquiring a first picture obtained by scanning a target object by a camera and displaying the first picture;
the searching module is used for searching for pre-stored information associated with the target object in the first picture; and
the display module is used for displaying the pre-stored information on the preset position of the target object in the first picture.
9. An information display device, comprising a processor, and a memory, a camera and a display screen coupled to the processor, wherein
the processor is configured to execute a computer program stored in the memory to perform, in conjunction with the camera and the display screen, the method according to any one of claims 1 to 7.
10. A storage device, characterized in that a computer program is stored thereon, wherein the computer program is executable by a processor to implement the method according to any one of claims 1 to 7.
CN202010441949.0A 2020-05-22 2020-05-22 Information display method, device, equipment and storage device Withdrawn CN111680191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010441949.0A CN111680191A (en) 2020-05-22 2020-05-22 Information display method, device, equipment and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010441949.0A CN111680191A (en) 2020-05-22 2020-05-22 Information display method, device, equipment and storage device

Publications (1)

Publication Number Publication Date
CN111680191A (en) 2020-09-18

Family

ID=72434296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010441949.0A Withdrawn CN111680191A (en) 2020-05-22 2020-05-22 Information display method, device, equipment and storage device

Country Status (1)

Country Link
CN (1) CN111680191A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111093034A (en) * 2019-12-31 2020-05-01 维沃移动通信有限公司 Article searching method and electronic equipment
CN111159449A (en) * 2019-12-31 2020-05-15 维沃移动通信有限公司 Image display method and electronic equipment
CN111161037A (en) * 2019-12-31 2020-05-15 维沃移动通信有限公司 Information processing method and electronic equipment
CN111178305A (en) * 2019-12-31 2020-05-19 维沃移动通信有限公司 Information display method and head-mounted electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672064A (en) * 2021-03-18 2021-04-16 视云融聚(广州)科技有限公司 Algorithm scheduling method, system and equipment based on video region label

Similar Documents

Publication Publication Date Title
CN108229120B (en) Face unlocking method, face unlocking information registration device, face unlocking information registration equipment, face unlocking program and face unlocking information registration medium
US9576194B2 (en) Method and system for identity and age verification
EP3493088B1 (en) Security gesture authentication
US7647279B2 (en) Method to make transactions secure by means of cards having unique and non-reproducible identifiers
CN106792267B (en) A kind of picture and video information authenticity mark and the system and method that identify
US10789353B1 (en) System and method for augmented reality authentication of a user
CN110163053B (en) Method and device for generating negative sample for face recognition and computer equipment
CN108124093B (en) Method and system for preventing terminal photographing from counterfeiting
KR101635074B1 (en) Financial service providing method and system using mobile non-contact type real name confirmation
GB2501362A (en) Authentication of an online user using controllable illumination
EP3594879A1 (en) System and method for authenticating transactions from a mobile device
CN108875582A (en) Auth method, device, equipment, storage medium and program
CN111680191A (en) Information display method, device, equipment and storage device
RU2709649C2 (en) Remote registration system for mobile communication users
CN109345186B (en) Service handling method based on Internet of things and terminal equipment
CN112580615B (en) Living body authentication method and device and electronic equipment
RU188800U1 (en) Subscriber Identity Means
US20050144444A1 (en) Data card and authentication process therefor
KR102602213B1 (en) Electronic terminal apparatus that performs genuine product certification and additional information offer for the product based on two-dimensional code recognition and biometric information authentication and operating method thereof
CN115114557B (en) Page data acquisition method and device based on block chain
CN116597259B (en) Site information verification method and device, equipment, medium and product thereof
RU2716221C1 (en) Method of remote registration of a mobile communication user by means of a mobile communication device equipped with a shooting module and a touch screen
KR102602202B1 (en) Electronic terminal apparatus that can perform genuine product certification for the product based on augmented reality through the recognition of a two-dimensional code printed on the product and operating method thereof
EP4113334A1 (en) Method and system for automatic proofing of a remote recording
JP2023184013A (en) Age estimation system and age estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200918)