CN107463922B - Information display method, information matching method, corresponding devices and electronic equipment - Google Patents
- Publication number
- CN107463922B (application CN201710721134.6A)
- Authority
- CN
- China
- Prior art keywords
- identification
- mark
- face image
- information
- order
- Prior art date
- Legal status
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/01—Customer relationship services
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Marketing (AREA)
- Finance (AREA)
- Economics (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Strategic Management (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An information display method, an information matching method, corresponding devices and electronic equipment are disclosed. In an embodiment of the invention, a client acquires an image of waiting service providers in real time, intercepts the face images and uploads them to a server for face recognition; after the user identification of the service provider in the acquired image is recognized, it is judged whether that service provider is bound to an order of the client that uploaded the image, and additional information is rendered and displayed according to the matching result to mark the face image. The user can therefore quickly identify, through the client application, the service provider who is serving him or her, which improves the efficiency of service hand-off and the user experience.
Description
Technical Field
The invention relates to augmented reality technology, and in particular to an information display method, an information matching method, corresponding devices, and electronic equipment.
Background
The mobile internet, combined with offline services, enables a wide variety of O2O (Online To Offline) services. Merchants and practitioners can provide many kinds of services through internet-based O2O business systems. In some application scenarios, a user may face multiple service providers at the same time and find it difficult to identify which of them matches his or her order. For example, in an internet takeout service, a purchaser picking up a meal downstairs may face several deliverers wearing the same uniform. The purchaser has to ask them one by one to find out which deliverer is responsible for the goods the purchaser ordered. This reduces the efficiency of service provision and also degrades the user experience.
Disclosure of Invention
In view of this, embodiments of the present invention provide an information display method, an information matching method, a corresponding apparatus, and an electronic device, so as to provide prompt information for a user in an augmented reality manner, improve service efficiency, and improve user experience.
According to a first aspect of embodiments of the present invention, there is provided an information display method, the method including:
acquiring an image in real time and intercepting a face image in the image;
sending an identification request to request matching of the face image and the identification mark; and
rendering and displaying additional information on the image acquired in real time according to the matching result, so as to mark the face image.
According to a second aspect of the embodiments of the present invention, there is provided an information matching method, the method including:
receiving an identification request, wherein the identification request comprises a face image to be matched and an identification mark;
identifying the face image in the identification request according to the pre-registered face image;
when the recognition is successful, acquiring a user identifier corresponding to the face image in the recognition request;
sending an identification request response to a client corresponding to the identification mark according to the matching result of the user identification and the order corresponding to the identification mark;
wherein the identification request response is used for prompting the matching result of the face image to be matched and the identification mark.
According to a third aspect of embodiments of the present invention, there is provided an information display apparatus, the apparatus including:
the image processing unit is used for acquiring an image in real time and intercepting a face image in the image;
the request unit is used for sending a recognition request to request the matching of the face image and the recognition identifier; and
the rendering display unit is used for rendering and displaying the additional information on the image acquired in real time according to the matching result, so as to mark the matched face image.
According to a fourth aspect of the embodiments of the present invention, there is provided an information matching apparatus, the apparatus including:
the receiving unit is used for receiving an identification request, wherein the identification request comprises a face image to be matched and an identification mark;
the recognition unit is used for recognizing the face image in the recognition request according to the pre-registered face image;
the user identification obtaining unit is used for obtaining the user identification corresponding to the face image in the identification request when the identification is successful;
the response unit is used for sending an identification request response to the client corresponding to the identification mark according to the matching result of the user identification and the order corresponding to the identification mark;
wherein the identification request response is used for prompting the matching result of the face image to be matched and the identification mark.
According to a fifth aspect of embodiments of the present invention there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of the first or second aspect.
According to a sixth aspect of embodiments of the present invention, there is provided an electronic device comprising a memory and a processor, the memory being configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any one of the first to second aspects.
In an embodiment of the invention, the client acquires an image of the waiting service provider in real time, intercepts the face image and uploads it to the server side for face recognition; after the user identification of the service provider in the acquired image is recognized, it is judged whether that service provider is bound to an order of the client that uploaded the image, and additional information is rendered and displayed according to the matching result to mark the face image. The user can therefore quickly identify, through the application program on the mobile terminal, the service provider who is serving him or her, which improves the efficiency of service hand-off and the user experience.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a hardware system architecture of an embodiment of the present invention;
FIG. 2 is a flow chart of an information display method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a client interface of an embodiment of the present invention;
FIG. 4 is another schematic diagram of a client interface of an embodiment of the present invention;
FIG. 5 is a flow chart of a method for displaying information at a client side according to an embodiment of the present invention;
FIG. 6 is a flow chart of an information matching method at the server side according to an embodiment of the present invention;
FIG. 7 is a schematic view of an information display system of an embodiment of the present invention;
FIG. 8 is a schematic diagram of a client for implementing a method of an embodiment of the invention;
fig. 9 is a schematic diagram of a server for implementing the method of an embodiment of the invention.
Detailed Description
The present invention will be described below based on examples, but the present invention is not limited to only these examples. In the following detailed description of the present invention, certain specific details are set forth. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present invention, "a plurality" means two or more unless otherwise specified.
FIG. 1 is a diagram of a hardware system architecture of an embodiment of the present invention. As shown in fig. 1, the system architecture includes a client 1, a client 2, and a server 3. The client 1 and the client 2 are connected to the server 3 through a network. The client 1 registers in advance with the server 3 so that it can receive tasks assigned by the server 3. The client 2 may purchase services by logging in to the server 3. After a purchase, the server 3 assigns the order to a specific client 1. The client 1, as a service provider, has a corresponding user identification on the server 3 side. The client 2, as a consumer, also has a corresponding user identification on the server side. Different types of users can use different functions after logging in with their respective user identifications. Each order binds the user identification of a service provider and the user identification of a consumer, so as to pair a service provider with a service purchaser. However, as described above, in some application scenarios a user may face several service providers of different orders at the same time, and it is difficult to identify which of them is the person matching his or her order.
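For illustration only, the following minimal sketch shows how such an order binding might be represented on the server 3 side; the class and field names (Order, provider_user_id, consumer_user_id) are assumptions and are not taken from the patent.

```python
# A minimal sketch (assumed names) of an order that binds a service provider's
# user identification to a consumer's user identification, as described above.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str          # identification of this order
    provider_user_id: str  # user identification of the service provider (client 1)
    consumer_user_id: str  # user identification of the consumer (client 2)

# Example binding: the server pairs one provider with one purchaser per order.
order = Order(order_id="ORDER-001",
              provider_user_id="provider-42",
              consumer_user_id="consumer-7")
```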
In the following description, the embodiments of the present invention are further described by taking an internet takeout scenario as an example. It should be understood that the embodiments of the present invention are not limited to the above application scenarios, and may also be applied to various scenarios, such as express delivery, home service, and the like.
Fig. 2 is a flowchart of an information display method according to an embodiment of the present invention. As shown in fig. 2, the information display method according to the embodiment of the present invention includes:
and S100, acquiring an image in real time at the client side and intercepting a face image in the image.
Specifically, the user may be prompted to click an instruction to start the image capturing apparatus, or the image capturing apparatus may be automatically started after receiving an arrival message from the client 1 of the service provider.
After the image acquisition device is started, it acquires images within the lens range in real time and displays them on the display in real time. This keeps the content shown on the screen consistent with the actual scene at all times.
Meanwhile, the human face part in the image can be detected by carrying out human face detection on one or more frames in the dynamic image acquired in real time. And intercepting the detected face part according to a preset rule, so that a face image can be obtained. When multiple people exist in the image, the face detection algorithm can detect and intercept multiple face images. The face detection can be implemented by various existing image processing algorithms, such as a reference template method, a face rule method, a feature sub-face method, a sample identification method, and the like.
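As an illustration of this step, the sketch below crops face images from a single frame using OpenCV's bundled Haar cascade; the patent does not prescribe any particular detection algorithm, so this is only one possible realization.

```python
# A minimal sketch of detecting and cropping face images from one frame of the
# real-time video. The Haar cascade is only one of the existing face detection
# algorithms mentioned above.
import cv2

def crop_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # One cropped face image per person detected in the frame.
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```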
In an alternative implementation, after a face is detected, a marker may be rendered and displayed around each face image in real time, e.g., the box shown in fig. 3. Therefore, the face detection success can be fed back to the user, and the user experience is improved. Further, the rendered markers may be caused to track face movement in the dynamic image by a face tracking algorithm.
The face detection and face tracking can be realized by an algorithm module preset in the client 2.
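The sketch below illustrates one very simple way to keep a rendered marker attached to the same face across frames, by associating each new detection with the nearest box from the previous frame; an actual client would more likely use a dedicated face tracking algorithm, which the patent leaves unspecified.

```python
# A rough sketch of associating a newly detected face box with a box from the
# previous frame so that the rendered marker follows the moving face.
def nearest_track(prev_boxes, new_box, max_shift=80):
    """prev_boxes: {track_id: (x, y, w, h)}; new_box: (x, y, w, h).
    Returns the id of the closest previous box, or None if none is within
    max_shift pixels (i.e. the face is treated as newly entered)."""
    nx, ny = new_box[0] + new_box[2] / 2, new_box[1] + new_box[3] / 2
    best_id, best_d2 = None, max_shift ** 2
    for tid, (x, y, w, h) in prev_boxes.items():
        d2 = (x + w / 2 - nx) ** 2 + (y + h / 2 - ny) ** 2
        if d2 < best_d2:
            best_id, best_d2 = tid, d2
    return best_id
```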
And step S200, sending a recognition request to request for matching the face image with the recognition identifier.
Specifically, the client 2 sends the identification request to the server 3 through the network.
In an alternative implementation, if a plurality of face images are obtained by interception, the client 2 calls the interface to send a plurality of recognition requests in parallel, and each recognition request comprises one of the face images obtained by interception and a recognition identifier. Therefore, the subsequent server 3 can match each face image one by one conveniently.
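A hedged sketch of this parallel-request behaviour is shown below; the endpoint URL and field names are purely illustrative assumptions, not part of the patent or of any real API.

```python
# A sketch of sending one recognition request per cropped face image in
# parallel. RECOGNIZE_URL and the request fields are hypothetical.
import concurrent.futures
import requests

RECOGNIZE_URL = "https://example.com/api/recognize"  # hypothetical endpoint

def send_recognition_request(face_jpeg_bytes, identification_mark):
    return requests.post(
        RECOGNIZE_URL,
        files={"face_image": ("face.jpg", face_jpeg_bytes, "image/jpeg")},
        data={"identification_mark": identification_mark},
        timeout=10,
    )

def send_all(face_images, identification_mark):
    # One recognition request per intercepted face image, sent in parallel.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(send_recognition_request, img, identification_mark)
                   for img in face_images]
        return [f.result() for f in futures]
```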
In this embodiment, the identification mark corresponds to an order. This allows the client 2 to request a determination of whether an intercepted face image matches the order, and thus to identify which distributor in the dynamic image currently acquired in real time is the one delivering the goods for the user bound to the client 2. The identification mark may be the user identification of the purchaser, so that on the server 3 side the order corresponding to the user can be obtained by querying with the user identification. Alternatively, the identification mark may directly be the identification of the current order of the client 2.
And step S300, identifying the face image in the identification request according to the pre-registered face image.
The distributor can pre-register the face image of the distributor on the side of the server 3 through the client 1 or other terminal equipment, so as to establish the corresponding relation between the pre-registered face image and the user identification of the distributor. Thus, the server 3, upon receiving the recognition request, can compare the similarity between the face image in the recognition request and the pre-registered face image to perform the recognition.
In an alternative implementation, an HTTP (hypertext transfer protocol) request service may be submitted to the registration website through a POST method to invoke the service to pre-register the face image. The face may also be identified by calling the service in a similar way.
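The following sketch illustrates such a POST-based pre-registration call; the registration URL and parameter names are assumptions made only for illustration and do not refer to a specific real service.

```python
# A sketch of pre-registering a distributor's face image by submitting an HTTP
# POST request, as suggested above. URL and field names are hypothetical.
import requests

def register_face(user_id, face_jpeg_path,
                  url="https://example.com/api/face/register"):  # hypothetical
    with open(face_jpeg_path, "rb") as f:
        resp = requests.post(
            url,
            files={"face_image": ("face.jpg", f, "image/jpeg")},
            data={"user_id": user_id},  # binds the face to the distributor's user id
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()
```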
And step S400, when the identification is successful, acquiring the user identification corresponding to the face image in the identification request.
In an alternative implementation, when the similarity is higher than a predetermined threshold (e.g., 95%), the two are determined to match, and the matching user identifier is assigned to a variable that characterizes the identity of the dispenser. When the similarity does not satisfy this requirement, the variable keeps a predetermined value (for example, 0). Therefore, after the whole comparison process is finished, if the variable characterizing the identity of the distributor is not the predetermined value, the recognition has succeeded; otherwise, no matching distributor was recognized.
For each recognition request, a variable characterizing the identity of the distributor is finally obtained. If the variable is the predetermined value, the face image in the recognition request failed to be recognized or did not match a dispenser. If the variable is the user identification of a specific distributor, the recognition succeeded.
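The comparison loop described above could look roughly like the following sketch, in which the identity variable starts at the predetermined value 0 and is overwritten only when a pre-registered face exceeds the similarity threshold; the helper names and the similarity function are assumptions.

```python
# A minimal sketch of the server-side comparison described above.
UNMATCHED = 0      # predetermined value: recognition failed / no match
THRESHOLD = 0.95   # similarity threshold (e.g., 95%)

def identify_distributor(request_face, registered_faces, similarity):
    """registered_faces: {user_id: face_template}; similarity: comparison fn."""
    distributor_id = UNMATCHED
    for user_id, template in registered_faces.items():
        if similarity(request_face, template) > THRESHOLD:
            distributor_id = user_id   # recognition succeeded for this request
            break
    return distributor_id
```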
In step S500, an identification request response is sent to the client corresponding to the identification identifier according to the matching result between the user identifier and the order corresponding to the identification identifier.
Specifically, when the user identifier is matched with the order corresponding to the identification identifier, an identification request response representing the matching of the face image to be matched and the identification identifier is sent. And when the user identification is not matched with the order corresponding to the identification, sending an identification request response representing that the face image to be matched is not matched with the identification.
In an internet take-away system, the user identification of the deliverer is associated with the orders he or she is responsible for delivering. Typically, a single distributor is associated with multiple orders at the same time. Therefore, by judging whether the user identifier of the distributor obtained through recognition corresponds to the order represented by the identification mark in the recognition request, it can be determined whether that distributor is responsible for delivering this specific order, and whether to prompt the user on the display can be decided accordingly.
When the identification mark is the user identification of the currently logged-in client 2, step S500 may include:
step S510, acquiring a corresponding order identification according to the user identification of the client in the identification request;
step S520, when the user identifier of the distributor associated with the order identifier is the same as the user identifier obtained through facial image recognition, sending an identification request response to the client corresponding to the user identifier to prompt that the matching is successful.
When the identification mark is the order mark of the current order of the client 2, step S500 may directly determine whether the user mark of the distributor associated with the order mark is the user mark obtained through face image recognition, and if so, send an identification request response to the client corresponding to the user mark to prompt that the matching is successful.
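Both variants of step S500 can be summarized by the sketch below; the lookup helpers and the response fields are illustrative assumptions rather than the patent's actual interfaces.

```python
# A sketch of deciding the identification request response for both kinds of
# identification mark (consumer user id, or order id). Helper functions and
# field names are assumed for illustration.
def build_response(identification_mark, recognized_user_id, is_order_mark,
                   get_order_by_consumer, get_order):
    # Resolve the order that the identification mark refers to (two cases above).
    order = (get_order(identification_mark) if is_order_mark
             else get_order_by_consumer(identification_mark))
    matched = order is not None and order.provider_user_id == recognized_user_id
    return {
        "match": 1 if matched else 0,                        # identification bit
        "order_info": order.order_id if matched else None,   # could equally be an item name
    }
```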
In an alternative implementation, the recognition request response may be a message having an identification bit. The client receiving the identification request response may determine the matching result according to the identification bit; for example, when the identification bit is 0, the face image in the corresponding identification request failed to match, and when the identification bit is 1, it matched successfully.
In another alternative implementation, the identification request responses that characterize a successful match carry order information, while the identification request responses that characterize a failed match do not carry order information. For example, in an internet takeaway system, if the order is for delivery of a specific item, the order information may be the name of that item. This allows the client 2 to directly extract the order information for rendering and display after receiving the identification request response, so that the user can conveniently and intuitively determine the deliverer who delivers his or her order.
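On the client side, interpreting such a response could be as simple as the following sketch; the field names are assumptions matching the two alternative implementations above.

```python
# A sketch of reading the identification request response: a flag bit plus
# optional order information carried only on a successful match.
def parse_response(resp_json):
    matched = resp_json.get("match") == 1      # flag bit: 1 = matched, 0 = not matched
    order_info = resp_json.get("order_info")   # present only when matched
    return matched, order_info
```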
And S600, rendering and displaying additional information on the image acquired in real time according to the identification request response to mark the matched face image.
In an optional implementation manner, when the matching result is that the face image and the identification mark are matched, first additional information is rendered and displayed to mark the matched face image. Thus, in this step, the client 2 renders and displays additional information on the display according to the identification request response to mark the matched face image, thereby indicating to the user which dispenser in the dynamic image acquired in real time is responsible for his or her order. The user can directly approach the corresponding delivery person to receive the service, without inquiring of the other delivery personnel on site one by one. This improves the efficiency of service hand-off and the user experience. At the same time, it can prevent the user from revealing the content of the order or personal privacy while communicating with distributors, improving the safety of the service process.
For matching face images, the additional information may be an icon pointing to the face image to be marked, or a box showing the face tracking status in the previous face tracking may be highlighted (e.g., bolded and rendered in a conspicuous color).
In an optional implementation manner, the first additional information is order information corresponding to the identification mark. Correspondingly, step S600 includes:
and step S610, obtaining order information corresponding to the identification mark.
If the order information is stored in the client 2, the current order information can be directly extracted and displayed.
If the identification request response returned in step S500 contains order information, the identification request response may be read to obtain the order information.
And S620, rendering and displaying the order information on the image acquired in real time to mark a matched face image.
In particular, the order information may be displayed in a variety of ways to mark the facial image. For example, the text of the order information may be displayed directly above the face image. For another example, as shown in fig. 4, an information frame pointing to the face image is rendered and displayed near the face image matched in the image currently acquired in real time to display the additional information.
Thus, it is possible to obtain both an indication of the purchased commodity and an indication of the responsible distributor in an augmented reality manner.
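As a rough illustration of this kind of augmented-reality marking, the sketch below draws a highlighted box and the order text next to the matched face on a frame with OpenCV; an actual client would render this through its mobile UI framework, so this is only a stand-in for the idea.

```python
# A sketch of rendering order information as a label near the matched face.
import cv2

def draw_order_label(frame, face_box, order_text):
    x, y, w, h = face_box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)   # highlight matched face
    cv2.putText(frame, order_text, (x, max(0, y - 10)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)     # order info above the face
    return frame
```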
In another optional implementation manner, when the matching result is that the face image and the identification mark do not match, rendering and displaying second additional information to mark the face image which does not match. Therefore, by marking the unmatched face images, the user can identify the distributor serving for the user through an exclusion method. The second additional information may be that "X" is displayed on the unmatched face image.
In still another alternative implementation manner, when the matching result is that the facial image and the identification mark are matched, the first additional information is rendered and displayed to mark the matched facial image. Meanwhile, when the matching result is that the face image is not matched with the identification mark, second additional information is rendered and displayed to mark the unmatched face image. That is, if there are multiple persons in the image acquired in real time, the client intercepts and acquires multiple face images and correspondingly transmits multiple identification requests. If one of the face images is matched with the identification mark, other face images are not matched. In the implementation manner, the client displays the first additional information by rendering to mark the matched face image, and simultaneously displays the second additional information by rendering to mark the unmatched face image. This provides redundant information to the user to improve the accuracy of the user identification.
In this embodiment, an image of the waiting service provider is acquired in real time, a face image is intercepted and uploaded to the server side for face recognition; after the user identification of the service provider corresponding to the image is obtained through recognition, it is judged whether the service provider is bound to an order of the client that uploaded the image, and additional information is rendered and displayed according to the matching result to mark the face image. The user can therefore quickly identify, through the application program on the mobile terminal, the service provider who is serving him or her, which improves the efficiency of service hand-off and the user experience.
Fig. 5 is a flowchart of an information display method on the client side according to an embodiment of the present invention. As shown in fig. 5, the information display method of the present embodiment includes:
and S1000, acquiring an image in real time and intercepting a face image in the image.
And S2000, sending a recognition request to request for matching the face image with the recognition identifier.
The identification request comprises the intercepted face image and the identification mark. The identification mark corresponds to the order, and may be a user mark of a client sending the identification request, or may be a direct order mark.
Specifically, if a plurality of face images are intercepted, the interface is called to send a plurality of recognition requests in a parallel mode, and each recognition request comprises one of the intercepted face images and a recognition identifier. Therefore, the subsequent server 3 can match each face image one by one conveniently.
And step S3000, rendering and displaying additional information on the image acquired in real time according to the matching result so as to mark the matched face image.
Specifically, when the matching result is that the face image is matched with the identification mark, rendering and displaying first additional information to mark the matched face image; and/or when the matching result is that the face image and the identification mark are not matched, rendering and displaying second additional information to mark the unmatched face image. The matching of the face image and the identification mark means that the corresponding user mark is matched with the order corresponding to the identification mark.
In this step, the client 2 renders and displays additional information on the display according to the recognition request response to mark the matched face image, thereby indicating to the user which dispenser in the dynamic images acquired in real time is responsible for his or her order. The user can directly approach the corresponding delivery person to receive the service, without inquiring of the other delivery personnel on site one by one. This improves the efficiency of service hand-off and the user experience. At the same time, it can prevent the user from revealing the content of the order or personal privacy while communicating with distributors, improving the safety of the service process.
The first additional information may be an icon pointing to the face image to be marked, or a frame showing the face tracking status in the previous face tracking may be highlighted (e.g., bolded and rendered in a conspicuous color). The second additional information may be "x" displayed at a middle position of the box.
In an optional implementation manner, the additional information is order information corresponding to the identification. Correspondingly, step S3000 includes:
and step S3100, obtaining order information corresponding to the identification mark.
If the order information is stored in the client 2, the current order information can be directly extracted and displayed.
If order information is contained in the identification request response corresponding to the identification request, the identification request response can be read to obtain the order information.
And step S3200, rendering and displaying the order information on a display to mark the matched face image.
In particular, the order information may be displayed in a variety of ways to mark the facial image. For example, the text of the order information may be displayed directly above the face image. For another example, as shown in fig. 4, an information frame pointing to the face image is rendered and displayed near the face image matched in the image currently acquired in real time to display the additional information.
Thus, it is possible to obtain both an indication of the purchased commodity and an indication of the responsible distributor in an augmented reality manner.
Fig. 6 is a flowchart of an information matching method at the server side according to an embodiment of the present invention. The information matching method of the embodiment is used for carrying out face recognition and matching according to the recognition request uploaded by the client, and feeding back the matching result to the client, so that the client can conveniently indicate the service provider in an augmented reality mode. As shown in fig. 6, the method of the present embodiment includes:
and step S4000, receiving an identification request, wherein the identification request comprises a face image to be matched and an identification mark.
The identification mark and the order form a corresponding relation. The identification identifier may be a user identifier of the client sending the identification request, or may be an order identifier.
And step S5000, identifying the face image in the identification request according to the pre-registered face image.
The distributor can pre-register the face image of the distributor on the side of the server 3 through the client 1 or other terminal equipment, and establish the corresponding relation between the pre-registered face image and the user identification of the distributor. Thus, the server 3 can compare the similarity between the face image in the recognition request and the pre-registered face image after receiving the recognition request.
In an alternative implementation, an HTTP (hypertext transfer protocol) request service may be submitted to the registration website through a POST method to invoke the service to pre-register the face image. The face may also be identified by calling the service in a similar way.
And step S6000, acquiring the user identification corresponding to the matched pre-registered face image when the identification is successful.
In an alternative implementation, when the similarity is higher than a predetermined threshold (e.g., 95%), the two are determined to match, and the matching user identifier is assigned to a variable that characterizes the identity of the dispenser. When the similarity does not satisfy this requirement, the variable keeps a predetermined value (for example, 0). Therefore, after the whole comparison process is finished, if the variable characterizing the identity of the distributor is not the predetermined value, the recognition has succeeded; otherwise, no matching distributor was recognized.
For each recognition request, a variable characterizing the identity of the distributor is finally obtained. If the variable is the predetermined value, the face image in the recognition request failed to be recognized or did not match a dispenser. If the variable is the user identification of a specific distributor, the recognition succeeded.
And S7000, when the user identification corresponds to the order corresponding to the identification, sending an identification request response to the client corresponding to the identification to prompt that the matching is successful.
In an internet take-away system, the user identification of the deliverer is associated with the orders he or she is responsible for delivering. Typically, a single distributor is associated with multiple orders at the same time. Therefore, if the user identifier of the distributor obtained through recognition corresponds to the order characterized by the identification mark in the recognition request, it can be determined that this distributor is responsible for delivering the specific order.
When the identification mark is the user identification of the currently logged-in client 2, step S7000 may include:
and S7100, acquiring a corresponding order identification according to the user identification of the client sending the identification request.
And S7200, when the user identifier of the distributor related to the order identifier is the same as the user identifier obtained through facial image recognition, sending a recognition request response to the client corresponding to the user identifier to prompt that the matching is successful.
When the identification mark is the order mark of the current order of the client 2, it can be directly judged whether the user mark of the distributor associated with the order mark is the user mark obtained by face image identification, if so, an identification request response is sent to the client corresponding to the user mark to prompt the successful matching.
In an alternative implementation, the recognition request response may be a message having an identification bit. The client receiving the identification request response may determine the matching result according to the identification bit; for example, when the identification bit is 0, the face image in the corresponding identification request failed to match, and when the identification bit is 1, it matched successfully.
In another alternative implementation, the identification request responses that characterize a successful match carry order information, while the identification request responses that characterize a failed match do not carry order information. For example, in an internet takeaway system, if the order is for delivery of a specific item, the order information may be the name of that item. This allows the client 2 to directly extract the order information for rendering and display after receiving the identification request response, so that the user can conveniently and intuitively determine the deliverer who delivers his or her order.
In this embodiment, the face image can be effectively matched with the identification mark in response to the identification request sent by the client, and the matching result is returned to the client so that the client can decide which face image to prompt.
Fig. 7 is a schematic diagram of an information display system of an embodiment of the present invention. As shown in fig. 7, the information display system of the embodiment of the present invention includes an information display apparatus a and an information matching apparatus B. The information display apparatus a includes, among others, an image processing unit 71, a request unit 72, and a rendering display unit 73.
The image processing unit 71 is configured to acquire an image in real time and intercept a face image in the image.
The requesting unit 72 is configured to send a recognition request for requesting matching between the face image and the recognition identifier. When the image processing unit 71 intercepts a plurality of face images, the request unit 72 sends a plurality of recognition requests in parallel, each recognition request including one of the intercepted face images and the recognition identifier. This makes it convenient for the information matching device B to perform matching one by one.
The rendering display unit 73 is configured to render and display additional information on the image acquired in real time according to the matching result to mark the matched face image. The matching of the face image and the identification mark means that the user mark corresponding to the face image is matched with the order corresponding to the identification mark. In this embodiment, the identification mark and the order form a corresponding relationship. Preferably, the identification identifier is a user identifier of the client sending the identification request or an order identifier of the current order.
Specifically, the rendering and displaying unit 73 is configured to render and display the first additional information to mark the matched face image when the matching result is that the face image and the identification identifier are matched, and/or render and display the second additional information to mark the unmatched face image when the matching result is that the face image and the identification identifier are unmatched.
In an optional implementation manner, the additional information is order information corresponding to the identification. The rendering display unit 73 further includes an acquisition sub-unit 73a and an order rendering sub-unit 73 b. The obtaining subunit 73a is configured to obtain the order information. The order rendering subunit 73b is configured to render and display the order information on the display to mark the matched face image.
Further, the obtaining subunit 73a is configured to obtain current order information stored locally, or, when the identification request response includes the order information, to obtain the order information from the identification request response corresponding to the identification request.
Further, the rendering display unit 73 is configured to render and display an information frame pointing to the face image in the vicinity of the face image matched in the image currently acquired in real time to display the additional information. Wherein, the information frame may include the order information.
Correspondingly, the information matching device B comprises a receiving unit 74, a recognition unit 75, a user identification obtaining unit 76 and a response unit 77.
The receiving unit 74 is configured to receive a recognition request, where the recognition request includes a face image to be matched and a recognition identifier.
The recognition unit 75 is configured to recognize the face image in the recognition request from the pre-registered face image.
The user identifier obtaining unit 76 is configured to obtain, when the recognition is successful, the user identifier corresponding to the face image in the recognition request.
The response unit 77 is configured to send an identification request response to the client corresponding to the identification identifier according to a matching result between the user identifier and the order corresponding to the identification identifier.
And the identification request response is used for prompting the matching result of the face image to be matched and the identification mark.
Specifically, the response unit 77 sends an identification request response representing that the face image to be matched and the identification mark are matched when the user identification matches the order corresponding to the identification mark. And when the user identification is not matched with the order corresponding to the identification, sending an identification request response representing that the face image to be matched is not matched with the identification.
In an optional implementation manner, when the user identifier is associated with the order corresponding to the identification identifier, the response unit 77 determines that the user identifier matches the order corresponding to the identification identifier.
Further, the order information corresponding to the identification identifier may be included in the identification request response. This may allow the information display apparatus a to extract order information to prompt the user after receiving the identification request response.
It should be understood that the method and apparatus of the embodiments of the present invention are not only applicable to internet takeaway systems, but also applicable to other mobile internet applications and web-based applications that require entry recall.
Fig. 8 is a schematic diagram of a client for implementing a method of an embodiment of the invention. Client 1 includes a display 81, a memory 82 (which may include one or more computer-readable storage media), a storage controller 83, one or more processing units (CPUs) 84, a peripheral interface 85, radio frequency circuitry 86, an input/output (I/O) subsystem 87, and one or more optical sensors 88 that can acquire images. These components may communicate over one or more communication buses or signal lines 89. It should be understood that the client 8 shown in fig. 8 is only one example, and that the client 8 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of components.
The memory 82 may store, among other things, software components such as an operating system, communication modules, interaction modules, and application programs. Each of the modules and applications described above corresponds to a set of executable program instructions that perform one or more functions and methods described in embodiments of the invention.
The electronic device shown in fig. 9 is a general-purpose data processing apparatus, for example, a server. The electronic device comprises a general purpose computer hardware structure including at least a processor 91 and a memory 92. The processor 91 and the memory 92 are connected by a bus 93. The memory 92 is adapted to store instructions or programs executable by the processor 91. The processor 91 may be a stand-alone microprocessor or may be a collection of one or more microprocessors. Thus, the processor 91 implements the processing of data and the control of other devices by executing instructions stored by the memory 92 to perform the method flows of embodiments of the present invention as described above. The bus 93 connects the above components together, and also connects the above components to a display controller 94 and a display device and an input/output (I/O) device 95. Input/output (I/O) devices 95 may be a mouse, keyboard, modem, network interface, touch input device, motion sensing input device, printer, and other devices known in the art. Typically, the input/output devices 95 are coupled to the system through an input/output (I/O) controller 96.
The flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention described above illustrate various aspects of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Also, as will be appreciated by one skilled in the art, aspects of embodiments of the present invention may be embodied as a system, method or computer program product. Accordingly, various aspects of embodiments of the invention may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module," or "system." Further, aspects of the invention may take the form of: a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer-readable media may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of embodiments of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to: electromagnetic, optical, or any suitable combination thereof. The computer readable signal medium may be any of the following computer readable media: is not a computer readable storage medium and may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including: object-oriented programming languages such as Java, Smalltalk, C++, PHP, Python, and the like; and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (18)
1. An information display method, characterized in that the method comprises:
acquiring an image in real time and intercepting a face image in the image;
sending an identification request to request matching of the face image and the identification mark; and
rendering and displaying additional information on the image acquired in real time according to the matching result to mark the face image;
wherein rendering and displaying additional information on a display to mark the face image according to the matching result comprises:
when the matching result is that the face image is matched with the identification mark, rendering and displaying first additional information to mark the matched face image; and/or
When the matching result is that the face image is not matched with the identification mark, rendering and displaying second additional information to mark the unmatched face image;
the matching of the face image and the identification mark means that the user mark corresponding to the face image is matched with the order corresponding to the identification mark.
2. The information display method according to claim 1, wherein the first additional information is order information corresponding to the identification mark;
wherein rendering and displaying the first additional information to mark the matched face image comprises:
acquiring the order information; and
and rendering and displaying the order information on the image acquired in real time to mark the matched face image.
3. The information display method according to claim 2, wherein acquiring the order information includes:
acquiring locally stored current order information; or
acquiring the order information from the identification request response corresponding to the identification request.
4. The information display method according to claim 1, wherein the identification mark is a user mark of a client sending the identification request or an order mark of a current order.
5. The information display method according to claim 1, wherein the sending of the identification request includes:
and sending a plurality of recognition requests in parallel, wherein the recognition requests comprise one of the intercepted face images and the recognition identification.
6. An information matching method, characterized in that the method comprises:
receiving an identification request, wherein the identification request comprises a face image to be matched and an identification mark;
identifying the face image in the identification request according to the pre-registered face image;
when the recognition is successful, acquiring a user identifier corresponding to the face image in the recognition request;
sending an identification request response to a client corresponding to the identification mark according to the matching result of the user identification and the order corresponding to the identification mark;
and the identification request response is used for prompting the matching result of the face image to be matched and the identification mark.
7. The information matching method according to claim 6, wherein when the user identifier is associated with the order corresponding to the identification identifier, it is determined that the user identifier matches the order corresponding to the identification identifier;
the identification mark is a user mark of a client side sending the identification request or an order mark of a current order.
8. The information matching method according to claim 6, wherein the identification request response includes order information corresponding to the identification identifier.
9. An information display apparatus, characterized in that the apparatus comprises:
the image processing unit is used for acquiring an image in real time and intercepting a face image in the image;
the request unit is used for sending a recognition request to request the matching of the face image and the recognition identifier; and
the rendering display unit is used for rendering and displaying the additional information on the image acquired in real time according to the matching result so as to mark the matched face image;
the rendering display unit is used for rendering and displaying first additional information to mark a matched face image when the matching result is that the face image is matched with the identification identifier, and/or rendering and displaying second additional information to mark a unmatched face image when the matching result is that the face image is not matched with the identification identifier;
the matching of the face image and the identification mark means that the user mark corresponding to the face image is matched with the order corresponding to the identification mark.
10. The information display device according to claim 9, wherein the first additional information is order information corresponding to the identification mark;
the rendering display unit includes:
the obtaining subunit is used for obtaining the order information; and
and the order rendering subunit is used for rendering and displaying the order information on the image acquired in real time so as to mark the matched face image.
11. The information display device according to claim 10, wherein the obtaining subunit is configured to obtain locally stored current order information, or is configured to obtain the order information from an identification request response corresponding to an identification request.
12. The information display device according to claim 9, wherein the identification mark is a user mark of a client that sends an identification request or an order mark of a current order.
13. The information display device according to claim 9, wherein the request unit is configured to send a plurality of recognition requests in parallel, the recognition requests including the recognition flag and one of the face images obtained by the interception.
14. An information matching apparatus, characterized in that the apparatus comprises:
the receiving unit is used for receiving an identification request, wherein the identification request comprises a face image to be matched and an identification mark;
the recognition unit is used for recognizing the face image in the identification request according to pre-registered face images;
the user identifier obtaining unit is used for obtaining the user identifier corresponding to the face image in the identification request when the recognition is successful; and
the response unit is used for sending an identification request response to the client corresponding to the identification identifier according to the matching result between the user identifier and the order corresponding to the identification identifier;
wherein the identification request response is used for indicating the matching result between the face image to be matched and the identification identifier.
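For illustration, the four units of claim 14 could be chained on the server side as sketched below; every callable argument (`recognize_face`, `lookup_user_identifier`, `match_order`, `send_to_client`) stands in for a component the claims leave unspecified.

```python
def handle_identification_request(request, recognize_face, lookup_user_identifier,
                                  match_order, send_to_client):
    """Sketch of claim 14: receiving, recognition, user identifier obtaining, response units."""
    # Receiving unit: the request carries a face image to be matched and an identification identifier.
    face_image = request["face_image"]
    identification_identifier = request["identification_identifier"]

    # Recognition unit: recognize the face image against pre-registered face images.
    registered_face = recognize_face(face_image)
    if registered_face is None:
        return None  # recognition failed; no user identifier can be obtained

    # User identifier obtaining unit: map the recognized face to its user identifier.
    user_identifier = lookup_user_identifier(registered_face)

    # Response unit: send an identification request response to the client corresponding
    # to the identification identifier, indicating the matching result.
    response = match_order(user_identifier, identification_identifier)
    send_to_client(identification_identifier, response)
    return response
```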
15. The information matching apparatus according to claim 14, wherein the response unit determines that the user identifier matches the order corresponding to the identification identifier when the user identifier is associated with the order corresponding to the identification identifier;
and the identification identifier is a user identifier of the client that sends the identification request or an order identifier of the current order.
16. The information matching apparatus according to claim 14, wherein order information corresponding to the identification identifier is included in the identification request response.
17. A computer-readable storage medium on which computer program instructions are stored, which computer program instructions, when executed by a processor, implement the method of any one of claims 1-8.
18. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710721134.6A CN107463922B (en) | 2017-08-17 | 2017-08-17 | Information display method, information matching method, corresponding devices and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107463922A CN107463922A (en) | 2017-12-12 |
CN107463922B (en) | 2020-02-14 |
Family
ID=60549374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710721134.6A Expired - Fee Related CN107463922B (en) | 2017-08-17 | 2017-08-17 | Information display method, information matching method, corresponding devices and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107463922B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063534B (en) * | 2018-05-25 | 2022-07-22 | 隆正信息科技有限公司 | Shopping identification and ideographic method based on image |
CN112446705B (en) * | 2019-09-04 | 2024-09-13 | 浙江天猫技术有限公司 | Settlement method and settlement device |
CN110990727B (en) * | 2019-11-01 | 2024-05-10 | 贝壳技术有限公司 | Broker information display method, device, storage medium and equipment |
CN113065493A (en) * | 2021-04-12 | 2021-07-02 | 北京京东振世信息技术有限公司 | Service state determination method, device, equipment, system and storage medium |
CN112990104A (en) * | 2021-04-19 | 2021-06-18 | 南京芯视元电子有限公司 | Augmented reality display device, control method thereof and intelligent head-mounted equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102368746A (en) * | 2011-09-08 | 2012-03-07 | 宇龙计算机通信科技(深圳)有限公司 | Picture information promotion method and apparatus thereof |
CN103413270A (en) * | 2013-08-15 | 2013-11-27 | 北京小米科技有限责任公司 | Method and device for image processing and terminal device |
CN104517227A (en) * | 2013-09-27 | 2015-04-15 | 上海酷远物联网科技有限公司 | Method and system for shopping through Internet or Internet of things |
CN104616159A (en) * | 2015-02-14 | 2015-05-13 | 成都我来啦网格信息技术有限公司 | Intelligent locker express item picking method based on cooperation E-business |
CN105335709A (en) * | 2015-10-21 | 2016-02-17 | 奇酷互联网络科技(深圳)有限公司 | Face identification display method, face identification display device and terminal |
CN105426829A (en) * | 2015-11-10 | 2016-03-23 | 深圳Tcl新技术有限公司 | Video classification method and device based on face image |
CN105469506A (en) * | 2015-12-09 | 2016-04-06 | 北京京东尚科信息技术有限公司 | Method and device for achieving product extraction based on face identification |
CN106600318A (en) * | 2016-12-07 | 2017-04-26 | 乐视控股(北京)有限公司 | Information matching method and device and electronic device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150227780A1 (en) * | 2014-02-13 | 2015-08-13 | FacialNetwork, Inc. | Method and apparatus for determining identity and programing based on image features |
- 2017-08-17: CN application CN201710721134.6A, patent CN107463922B (en), not active (Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN107463922A (en) | 2017-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107463922B (en) | Information display method, information matching method, corresponding devices and electronic equipment | |
US20180033015A1 (en) | Automated queuing system | |
JP2014170314A (en) | Information processing system, information processing method, and program | |
JP6865321B1 (en) | Entrance / exit management device, entrance / exit management method, entrance / exit management program, and entrance / exit management system | |
US10109096B2 (en) | Facilitating dynamic across-network location determination using augmented reality display devices | |
JP2021082356A (en) | Ordering system utilizing personal information | |
KR20230015272A (en) | Unmanned information terminal using artificial intelligence, order management server, and method for providing order information | |
US11804078B2 (en) | Information processing apparatus, control method, and program | |
JP6788710B1 (en) | Image output device and image output method | |
JP2006235865A (en) | Support instruction system, support instruction decision apparatus, support instruction method and support instruction decision program | |
EP3232393A1 (en) | Electronic transaction system, method and program | |
JP7036300B1 (en) | System, authentication method, authentication terminal, authentication terminal control method and program | |
WO2021181638A1 (en) | Information processing device, information processing method, and computer-readable recording medium | |
JP2016009309A (en) | Service support device, service support method, and program | |
CN111061451A (en) | Information processing method, device and system | |
US20230030754A1 (en) | Information processing server, information processing system, determination device, and method | |
US12002073B2 (en) | Information display terminal, information transmission method, and computer program | |
JP7529156B2 (en) | Information processing device, information processing system, and information processing method | |
WO2024003985A1 (en) | Server device, system, server device control method, and storage medium | |
CN113034198B (en) | User portrait data establishing method and device | |
TWM539114U (en) | Financial customer service auxiliary system | |
JP2023099845A (en) | Order system using personal information | |
KR20160132584A (en) | Similar user identification apparatus, system and the method | |
CN116596590A (en) | Unified issuing method and system for multiclass rights and interests | |
JPWO2022070253A5 (en) | Authentication terminal, authentication system, authentication terminal control method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: Room 202, floor 2, whole building, floor 1-3, No. 11, Shangdi Information Road, Haidian District, Beijing 100085 Applicant after: Beijing Xingxuan Technology Co.,Ltd. Address before: Room 202, floor 2, whole building, floor 1-3, No. 11, Shangdi Information Road, Haidian District, Beijing 100085 Applicant before: Beijing Xiaodu Information Technology Co.,Ltd. |
|
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20200214 |
|