WO2022257044A1 - Interaction method, interaction system and electronic device - Google Patents

Interaction method, interaction system and electronic device

Info

Publication number
WO2022257044A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target object
information
interactive
target
Prior art date
Application number
PCT/CN2021/099176
Other languages
English (en)
French (fr)
Inventor
冯朋朋
踪家双
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to PCT/CN2021/099176 priority Critical patent/WO2022257044A1/zh
Priority to CN202180001499.6A priority patent/CN115735190A/zh
Publication of WO2022257044A1 publication Critical patent/WO2022257044A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Definitions

  • The present disclosure relates to, but is not limited to, the field of human-computer interaction technology, and in particular to an interaction method, an interaction system, and an electronic device.
  • Embodiments of the present disclosure provide an interaction method, an interaction system, and an electronic device.
  • an embodiment of the present disclosure provides an interaction method, including: displaying an interaction interface when a target object is detected within the interaction range; in response to an operation of the target object on the interaction interface, displaying an image capture frame on the interaction interface and capturing an image to be recognized of the target object; obtaining the identity information of the target object based on the captured image to be recognized; and displaying the real-time image and identity information of the target object on the interaction interface.
  • the identity information includes at least: information that the target object is in a service institution.
  • when the service institution is a financial service institution, the information about the target object in the service institution includes at least one of the following: the wealth checkup result of the target object at the financial service institution, and the label of the target object at the financial service institution.
  • the interaction method further includes: acquiring expression information of the target object based on the captured image to be recognized, and displaying the expression information of the target object on the interaction interface; wherein the expression information includes: an expression type or an expression scoring result.
  • the interaction method further includes: updating the expression information displayed on the interaction interface according to the real-time image of the target object.
  • the interaction method further includes: when the expression information of the target object satisfies a set condition, combining the real-time image of the target object with the interaction information to generate a target photo suitable for being saved or printed by the target object; the interaction information includes at least: the identity information or expression information of the target object.
  • the interaction method further includes: displaying a photographing control on the interaction interface; and, in response to an operation of the target object on the photographing control, combining the real-time image of the target object with the interaction information to generate a target photo suitable for being saved or printed by the target object; the interaction information includes at least: the identity information or expression information of the target object.
  • the interaction information further includes: service promotion information of the service organization.
  • the interaction method further includes: after generating the target photo, displaying download information of the target photo on the interactive interface.
  • the interaction method further includes: when no target object is detected within the interaction range, displaying campaign promotion content.
  • the image acquisition frame includes a first image acquisition frame and a second image acquisition frame, and the second image acquisition frame is located in the first image acquisition frame.
  • the acquiring the identity information of the target object based on the captured image to be recognized includes: cropping and compressing the captured image to be recognized to obtain a target image; and recognizing the target image to obtain the identity information.
  • an embodiment of the present disclosure provides an interaction system, including: an interaction terminal.
  • the interactive terminal is configured to: display an interaction interface when a target object is detected within the interaction range; display an image capture frame on the interaction interface in response to an operation of the target object on the interaction interface, and capture an image to be recognized of the target object; obtain the identity information of the target object based on the captured image to be recognized; and display the real-time image and identity information of the target object on the interaction interface.
  • the interactive system further includes: a management server and a data server.
  • the management server is configured to process the collected image to be recognized to obtain a target image, and send the target image to the data server.
  • the data server is configured to identify the target image, obtain a recognition result, and return the recognition result to the management server.
  • the management server is further configured to return the identity information to the interaction terminal when the identification result includes identity information.
  • the interactive system further includes: a mobile server.
  • the management server is further configured to combine the real-time image of the target object and the interaction information to generate a photo of the target, and send the photo of the target to the mobile server.
  • the mobile server is configured to store the target photo, and send the download information of the target photo to the interactive terminal through the management server.
  • the interactive terminal is further configured to display the download information of the target photo on the interactive interface.
  • the interactive terminal includes: a detection processor, a display, an input processor, a camera, and an information processor.
  • the detection processor is configured to detect whether there is a target object within the interaction range.
  • the display is configured to display an interaction interface when the detection processor detects a target object in the interaction range.
  • the input processor is configured to detect the operation of the target object on the interactive interface.
  • the display is further configured to display an image acquisition frame on the interactive interface in response to the operation of the target object on the interactive interface.
  • the camera is configured to collect images of target objects to be identified.
  • the information processor is configured to transmit the collected images to be identified to the management server, and receive the identity information returned by the management server.
  • the display is further configured to display the real-time image and identity information of the target object on the interactive interface.
  • the management server includes: an image receiver, an image processor, an image sender, and a customer information processor.
  • the image receiver is configured to receive an image to be recognized from the interactive terminal.
  • the image processor is configured to process the image to be recognized to obtain a target image.
  • the image sender is configured to send the target image to the data server.
  • the customer information processor is configured to receive the identification result returned by the data server, obtain identity information from the identification result, and send the identity information to the interactive terminal.
  • an embodiment of the present disclosure further provides an electronic device, including a memory and a processor.
  • the memory is adapted to store a computer program which, when executed by the processor, implements the steps of the interaction method described above.
  • an embodiment of the present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, and when the computer program is executed, the steps of the above-mentioned interaction method are realized.
  • FIG. 1 is a flowchart of an interaction method in at least one embodiment of the present disclosure
  • Fig. 2 is a schematic diagram of an interactive system of at least one embodiment of the present disclosure
  • Fig. 3 is an exemplary diagram of an interactive system according to at least one embodiment of the present disclosure.
  • Fig. 4 is an exemplary diagram of an interactive system according to at least one embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of activity promotion content in at least one embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram of an interactive interface when a user has not yet participated in an interactive activity according to at least one embodiment of the present disclosure
  • Fig. 7 is a schematic diagram of an image acquisition frame displayed on an interactive interface of at least one embodiment of the present disclosure
  • FIG. 8 is a schematic interface diagram of an identity recognition process in at least one embodiment of the present disclosure.
  • Fig. 9 is a schematic diagram showing real-time images, identity information and expression information of an interactive interface according to at least one embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a target photo of at least one embodiment of the present disclosure.
  • Fig. 11 is a schematic diagram of download information of a target photo according to at least one embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of a multi-person interaction interface according to at least one embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of taking pictures of multi-person interaction in at least one embodiment of the present disclosure.
  • Fig. 14 is a schematic diagram of another interactive system according to at least one embodiment of the present disclosure.
  • FIG. 15 is a schematic diagram of an electronic device according to at least one embodiment of the present disclosure.
  • Embodiments of the present disclosure will be described in detail below in conjunction with the accompanying drawings. The embodiments may be implemented in many different forms. Those skilled in the art will readily appreciate that the manner and content may be changed into various forms without departing from the spirit and scope of the present disclosure. Therefore, the present disclosure should not be interpreted as being limited to the contents described in the following embodiments. In the case of no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other arbitrarily.
  • The term "connection" should be interpreted in a broad sense unless otherwise specified and limited. For example, it may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediate piece, or an internal communication between two components.
  • FIG. 1 is a flowchart of an interaction method according to at least one embodiment of the present disclosure. As shown in Figure 1, the interaction method provided by at least one embodiment of the present disclosure includes the following steps:
  • Step S1: when a target object is detected within the interaction range, display an interaction interface;
  • Step S2: in response to an operation of the target object on the interaction interface, display an image capture frame on the interaction interface, and capture an image to be recognized of the target object;
  • Step S3: obtain the identity information of the target object based on the captured image to be recognized;
  • Step S4: display the real-time image and identity information of the target object on the interaction interface.
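Assuming a simple event-driven terminal, steps S1 to S4 above can be sketched as a small state machine. Every state, event, and action name here is an illustrative assumption, not part of the disclosure:

```python
# Minimal sketch of steps S1-S4 as a state machine for the interaction
# terminal. All state, event, and action names are illustrative assumptions.

IDLE, INTERFACE_SHOWN, CAPTURING, SHOWING_RESULT = range(4)

def step(state, event):
    """Advance the terminal state for one event; returns (state, action)."""
    if state == IDLE and event == "target_detected":            # Step S1
        return INTERFACE_SHOWN, "display_interaction_interface"
    if state == INTERFACE_SHOWN and event == "user_operation":  # Step S2
        return CAPTURING, "show_capture_frame_and_capture_image"
    if state == CAPTURING and event == "identity_obtained":     # Steps S3-S4
        return SHOWING_RESULT, "display_realtime_image_and_identity"
    return state, None  # unrecognized event: stay in the current state
```

A typical pass walks the events "target_detected" → "user_operation" → "identity_obtained", producing the display actions of steps S1, S2, and S4 in order.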
  • the interaction mode provided by this embodiment combines object recognition with experiential games, thereby making human-computer interaction more engaging.
  • the interaction method in this embodiment may be executed by a terminal device having a display function.
  • the terminal device may be a welcome device (for example, a welcome robot, a welcome screen, etc.) of a financial service institution (for example, a bank).
  • the interaction method of this embodiment combines object recognition and experiential games to bring users a relaxed and interesting experience, which helps attract users' attention, thereby improving the promotion and marketing effects of products or services.
  • this embodiment does not limit it.
  • the interaction method in this embodiment may be applied to welcome equipment or business promotion equipment of other service institutions (for example, insurance service institutions).
  • the target object may be a person.
  • the identity information may at least include: information that the target object is in a service institution.
  • the identity information may include: basic information of the target object, label of the target object in the service institution, and ranking information of the target object in the service institution.
  • the basic information of the target object may include: age, gender, etc. of the target object. However, this embodiment does not limit it.
  • the service institution may be a financial service institution (eg, a bank).
  • the information about the target object in the service institution may include at least one of the following: the wealth checkup result of the target object in the financial service institution, and the label of the target object in the financial service institution.
  • the wealth checkup result of the target object at the financial service institution may include at least one of the following: the wealth health value (or wealth health degree) of the target object at the financial service institution, and the wealth health value ranking information (or wealth health degree ranking information) of the target object at the financial service institution.
  • the target object's identity information may include: the target object's age, gender, label at the bank where the target object is registered, and wealth health value ranking information at that bank.
  • the target object can be guided to optimize asset allocation, which is beneficial to the business and product promotion of financial service institutions.
  • the service institution may be an insurance service institution, and the information about the target object in the service institution may include: the insurance health ranking information of the target object in the insurance service institution.
  • the wealth checkup result of this embodiment can be obtained through wealth diagnosis or asset analysis; this embodiment does not limit the analysis method used to obtain the wealth checkup result.
  • the interaction method of this embodiment further includes: acquiring expression information of the target object based on the collected image to be recognized, and displaying the expression information of the target object on an interactive interface.
  • the expression information may include: expression type (for example, smile) or expression scoring result (for example, smile value).
  • the interactive interface can simultaneously display the real-time image, identity information and expression information of the target object.
  • displaying the expression information of the target object on the interaction interface helps guide the target object to adjust its expression (for example, guiding the target object to smile), thereby adjusting the mood of the target object and increasing the fun of the interaction, so as to improve the user experience.
  • the interaction method of this embodiment further includes: updating the expression information displayed on the interaction interface according to the real-time image of the target object.
  • the expression information in this embodiment may change dynamically according to the dynamic change of the real-time image of the target object. That is, the expression information may change dynamically in real time.
  • displaying dynamically changing expression information on the interaction interface helps guide the expression of the target object in real time, thereby increasing the fun of the interaction and improving the user experience.
  • the interaction method of this embodiment may further include: when the expression information of the target object satisfies set conditions, combining the real-time image of the target object with the interaction information to generate a target photo suitable for being saved or printed by the target object.
  • the interaction information at least includes: identity information or expression information of the target object.
  • the expression information of the target object meeting the set condition may include: the expression of the target object is a smile, and the smile value is greater than a threshold.
  • this embodiment does not limit it.
  • providing an interactive process of taking pictures as souvenirs can increase the fun of the interaction, and using expression information to automatically trigger taking pictures as souvenirs can further provide users with an easy and interesting experience and improve user experience.
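As one way to realize the automatic trigger described above, the set condition ("the expression is a smile and the smile value is greater than a threshold") can be checked on each expression update. The score scale, threshold value, dictionary keys, and function names below are illustrative assumptions:

```python
# Sketch of the automatic souvenir-photo trigger. The score scale,
# threshold, and dictionary keys are illustrative assumptions.

SMILE_THRESHOLD = 80  # assumed smile-value scale of 0..100

def should_take_photo(expression):
    """Set condition: the expression is a smile and its score exceeds
    the threshold."""
    return (expression.get("type") == "smile"
            and expression.get("score", 0) > SMILE_THRESHOLD)

def make_target_photo(realtime_image, interaction_info):
    """Combine the real-time image with the interaction information
    (identity or expression info); here just a stand-in structure."""
    return {"image": realtime_image, "overlay": interaction_info}
```

On each real-time frame the terminal would evaluate `should_take_photo` and, once it returns True, call `make_target_photo` with the current frame and the identity or expression information.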
  • the interaction method of this embodiment may further include: displaying a photographing control on the interaction interface; and, in response to an operation of the target object on the photographing control, combining the real-time image of the target object with the interaction information to generate a target photo suitable for being saved or printed by the target object. The interaction information includes at least: the identity information or expression information of the target object.
  • the interaction information may also include: service promotion information of the service institution.
  • when the service institution is a financial service institution, the business promotion information may include: financial product information promoted by the financial service institution, contact information of a promotion contact, and the like.
  • this embodiment does not limit it.
  • the interaction method of this embodiment may further include: after generating the target photo, displaying download information of the target photo on the interactive interface.
  • the download information of the target image may be presented in the form of a QR code.
  • this embodiment does not limit it.
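One way to produce the download information is to build a download URL and encode it as a QR code. The URL scheme and token derivation below are hypothetical illustrations (only the payload construction is shown, using the standard library); rendering the QR image itself would typically use a library such as `qrcode`:

```python
# Sketch of building the download information for a target photo. The base
# URL, path layout, and token scheme are illustrative assumptions; the
# resulting URL would be displayed on the interaction interface as a QR code.

import hashlib

def download_info(photo_id, base_url="https://example.invalid/photos"):
    """Return a download URL with a short verification token derived
    from the photo id (a stand-in for a real access-control scheme)."""
    token = hashlib.sha256(photo_id.encode("utf-8")).hexdigest()[:8]
    return f"{base_url}/{photo_id}?t={token}"
```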
  • the interaction method of this embodiment may further include: displaying event promotion content when no target object is detected within the interaction range.
  • the event promotion content may include at least one of the following: an event promotion poster, an event promotion video.
  • this embodiment does not limit it.
  • the image capture frame displayed in step S2 may include a first image capture frame and a second image capture frame, and the second image capture frame is located within the first image capture frame.
  • the first image capture frame may be configured to indicate the position of the user's upper body
  • the second image capture frame may be configured to indicate the position of the user's head.
  • this embodiment does not limit it.
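The nested capture frames can be laid out with simple geometry: the second (head) frame sits inside the first (upper-body) frame. The frame proportions and centering rule below are illustrative assumptions:

```python
# Sketch of the nested capture-frame layout: the first frame indicates the
# position of the upper body, and the second frame (for the head) lies
# inside it. Frame sizes and placement are illustrative assumptions.

def capture_frames(screen_w, screen_h):
    """Return ((x, y, w, h), (x, y, w, h)) for the two frames."""
    # First frame: a centered region covering most of the screen.
    fw, fh = int(screen_w * 0.6), int(screen_h * 0.8)
    fx, fy = (screen_w - fw) // 2, (screen_h - fh) // 2
    # Second frame: centered horizontally in the first frame, near its top.
    sw, sh = fw // 2, fh // 3
    sx, sy = fx + (fw - sw) // 2, fy + fh // 10
    return (fx, fy, fw, fh), (sx, sy, sw, sh)

def contains(outer, inner):
    """True if `inner` lies entirely within `outer`."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh
```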
  • acquiring the identity information of the target object based on the captured image to be recognized includes: cropping and compressing the captured image to be recognized to obtain a target image; and recognizing the target image to obtain the identity information.
  • the data server can be the data storage terminal for registered users of the service institution, and identity recognition can be performed through the data server, which makes full use of registered-user data, simplifies system design, and avoids additional development costs.
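The "crop and compress" preprocessing can be sketched in pure Python on an image represented as a nested list of pixel values; a real system would use an image library (e.g. Pillow), and "compress" here is a crude downsampling stand-in for real compression:

```python
# Pure-Python sketch of cropping and compressing the image to be recognized
# into a target image. The nested-list representation and the downsampling
# "compression" are illustrative simplifications.

def crop(image, x, y, w, h):
    """Keep the w x h region whose top-left corner is (x, y)."""
    return [row[x:x + w] for row in image[y:y + h]]

def compress(image, factor):
    """Downsample by keeping every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in image[::factor]]

def to_target_image(image, box, factor=2):
    """Crop to `box` = (x, y, w, h), then compress."""
    x, y, w, h = box
    return compress(crop(image, x, y, w, h), factor)
```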
  • FIG. 2 is a schematic diagram of an interactive system according to at least one embodiment of the present disclosure.
  • the interaction system of this embodiment may include: an interaction terminal 31 , a management server 32 and a data server 33 .
  • the management server 32 can communicate with the interactive terminal 31 and the data server 33 in a wireless or wired manner.
  • the interactive terminal 31 is configured to display an interactive interface when a target object is detected within the interactive range, and display an image capture frame on the interactive interface in response to an operation of the target object on the interactive interface, and collect an image of the target object to be recognized.
  • the management server 32 is configured to process the collected image to be recognized to obtain a target image, and send the target image to the data server 33 .
  • the data server 33 is configured to recognize the target image, obtain a recognition result, and return the recognition result to the management server 32 .
  • the management server 32 is configured to return the identity information to the interactive terminal 31 when the identification result includes the identity information.
  • the interactive terminal 31 is also configured to display the real-time image and identity information of the target object on the interactive interface.
  • the interactive terminal 31 may be an electronic device, for example, a welcome robot; the management server 32 and the data server 33 may be servers.
  • the data server 33 may be a customer relationship management (CRM, Customer Relationship Management) server of a service institution.
  • the management server 32 may be a management server of multiple interactive terminals 31 .
  • this embodiment does not limit it.
  • Fig. 3 is another exemplary diagram of an interactive system according to at least one embodiment of the present disclosure.
  • the interaction system of this embodiment may include: an interaction terminal 31 , a management server 32 , a data server 33 and a mobile server 34 .
  • the management server 32 can communicate with the interactive terminal 31 , the data server 33 and the mobile server 34 in a wireless or wired manner.
  • the management server 32 is further configured to combine the real-time image of the target object and the interaction information to generate a photo of the target, and send the photo of the target to the mobile server 34 .
  • the mobile server 34 is configured to store the target photo, and send the download information of the target photo to the interactive terminal 31 through the management server 32 .
  • the interactive terminal 31 is also configured to display the download information of the target photo on the interactive interface.
  • the client may be, for example, a user application program (APP) provided by the service institution.
  • the mobile terminal may be, for example, a mobile phone.
  • Fig. 4 is another exemplary diagram of an interactive system according to at least one embodiment of the present disclosure.
  • the interactive system of this embodiment includes: an interactive terminal 31 , a management server 32 , a data server 33 , a mobile server 34 and a printing device 35 .
  • the management server 32 can communicate with the interactive terminal 31 , the data server 33 , the mobile server 34 and the printing device 35 in a wireless or wired manner.
  • when the user opens the target photo through the client of the mobile terminal (for example, a mobile phone) and selects to print the target photo, the mobile server 34 transmits the print instruction and photo information to the management server 32, and the management server 32 transmits the print instruction and photo information to the printing device 35, so that the printing device 35 executes the print instruction according to the photo information.
  • the user can get the photo paper version of the target photo from the printing device 35 .
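The print flow above (mobile server → management server → printing device) can be sketched as simple message relaying between three components; all class names and the message format are illustrative assumptions:

```python
# Sketch of the print-instruction relay: the mobile server forwards the
# user's print request to the management server, which relays it to the
# printing device. Class names and message shapes are illustrative.

class PrintingDevice:
    def __init__(self):
        self.printed = []
    def execute(self, photo_info):
        self.printed.append(photo_info)  # stand-in for actual printing

class ManagementServer:
    def __init__(self, printer):
        self.printer = printer
    def relay_print(self, instruction, photo_info):
        if instruction == "print":
            self.printer.execute(photo_info)

class MobileServer:
    def __init__(self, management):
        self.management = management
    def on_user_print(self, photo_info):
        # The user selected "print" for a stored photo in the mobile client.
        self.management.relay_print("print", photo_info)
```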
  • the interactive terminal 31 and the printing device 35 can be placed in the lobby of the financial service institution, and the management server 32, the data server 33, and the mobile server 34 can be servers deployed in different places to ensure data security.
  • this embodiment does not limit it.
  • the interactive terminal 31 may include: a display 311 , a detection processor 312 , a camera 313 , an information processor 314 and an input processor 315 .
  • the detection processor 312 is configured to detect whether there is a target object within the interaction range.
  • the display 311 is configured to display an interaction interface when the detection processor 312 detects a target object in the interaction range.
  • the input processor 315 is configured to detect the operation of the target object on the interaction interface.
  • the display 311 is further configured to display an image acquisition frame on the interactive interface in response to the operation of the target object on the interactive interface.
  • the camera 313 is configured to collect images of the target object to be recognized.
  • the information processor 314 is configured to transmit the collected images to be identified to the management server 32 and receive the identity information returned by the management server 32 .
  • the display 311 is also configured to display the real-time image and identity information of the target object on the interactive interface.
  • the display 311 is adapted to provide an interactive interface.
  • the detection processor 312 is adapted to detect target objects within the interaction range.
  • the camera 313 is adapted to capture images.
  • the input processor 315 is adapted to detect operations on the interactive terminal.
  • the information processor 314 is adapted for information processing and transmission.
  • the display 311 , the detection processor 312 , the camera 313 , the information processor 314 and the input processor 315 can be connected through a bus.
  • the structure of the interactive terminal shown in FIG. 4 does not constitute a limitation on the interactive terminal; the interactive terminal may include more or fewer components than shown in the illustration, combine certain components, or adopt a different component arrangement.
  • the information processor 314 may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a field programmable gate array (FPGA).
  • the information processor 314 executes various functional applications and data processing by running stored software programs and modules.
  • the information processor 314 may also include communication devices such as communication circuits, so as to realize wireless or wired communication with the server.
  • the input processor 315 may be adapted to receive input information.
  • the input processor 315 may include a touch panel (or called a touch screen) and other input devices (such as a mouse, a keyboard, a joystick, etc.).
  • the display 311 may be adapted to display information entered by the user or provided to the user.
  • the display 311 may include a display panel, such as a liquid crystal display, an organic light emitting diode display panel, and the like.
  • the touch panel can be covered on the display panel. When the touch panel detects a touch operation on or near it, it transmits the operation to the information processor 314 to determine the type of the touch event, and the information processor 314 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • the touch panel and the display panel can be used as two independent components to implement the input and output functions of the interactive terminal, or the touch panel and the display panel can be integrated together to implement the input and output functions. However, this embodiment does not limit it.
  • the detection processor 312 may include: an optical sensor.
  • the optical sensor can be an infrared sensor.
  • the detection processor 312 can detect whether there is a target object within the interaction range by emitting infrared rays.
  • this embodiment does not limit it.
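Presence detection with an infrared sensor can be sketched as checking whether a few consecutive distance readings fall within the interaction range (a simple debounce against passers-by). The range limit and debounce count are illustrative assumptions:

```python
# Sketch of infrared presence detection for the detection processor: a
# target object counts as "within the interaction range" once several
# consecutive readings fall under the range limit. The limit and the
# debounce count are illustrative assumptions.

RANGE_CM = 150  # assumed interaction range in centimeters
DEBOUNCE = 3    # consecutive in-range readings required

def detect_presence(readings_cm, range_cm=RANGE_CM, debounce=DEBOUNCE):
    streak = 0
    for r in readings_cm:
        streak = streak + 1 if r < range_cm else 0
        if streak >= debounce:
            return True
    return False
```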
  • the camera 313 may directly acquire an image of the target object, and transmit the image to the information processor 314 for processing.
  • the management server 32 may include: an image receiver 321 , an image processor 322 , an image sender 323 and a customer information processor 324 .
  • the image receiver 321 is configured to receive an image to be recognized from the interactive terminal 31 .
  • the image processor 322 is configured to process the image to be recognized to obtain the target image.
  • the image transmitter 323 is configured to send the target image to the data server 33 .
  • the customer information processor 324 is configured to receive the identification result returned by the data server 33 , obtain identity information from the identification result, and send the identity information to the interactive terminal 31 .
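The management server's role described above (receive the image to be recognized, process it into a target image, send it to the data server, and return identity information when matched) can be sketched end to end; the data server is mocked, and every interface below is an illustrative assumption:

```python
# Sketch of the management server's identity pipeline with a mocked data
# server. All function names, message shapes, and the sample identity
# record are illustrative assumptions.

def process_to_target(image):
    """Stand-in for cropping/compressing the image to be recognized."""
    return {"target": image}

def mock_data_server(target_image):
    """Pretend the CRM data server matched a registered user."""
    return {"matched": True, "identity": {"label": "VIP", "rank": 12}}

def handle_image(image, recognize=mock_data_server):
    """Process the image, query the data server, and return identity
    information to the interactive terminal only when a match is found."""
    result = recognize(process_to_target(image))
    return result["identity"] if result.get("matched") else None
```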
  • this embodiment does not limit it.
  • the image receiver 321 and the image transmitter 323 may include communication devices such as communication circuits, so as to realize wireless or wired communication with interactive terminals and other servers.
  • the image processor 322 may include a processing device such as a microprocessor or a programmable logic device, so as to perform data processing.
  • the customer information processor 324 may include processing devices such as microprocessors or programmable logic devices, and may also include communication devices such as communication circuits for data processing and transmission. However, this embodiment does not limit it.
  • the interaction method of this embodiment will be illustrated below based on the interaction system shown in FIG. 4 .
  • the following is an example of an interactive terminal set up at a bank branch for welcoming guests.
  • the interactive terminal may be a welcome robot of a bank.
  • the management server may be a server that provides a management platform for the interactive terminals, so as to manage multiple interactive terminals.
  • the data server can be a bank's CRM server configured to store registered user information of the bank.
  • the mobile server can provide a service platform for interacting with user clients.
  • the printing device can be set at the bank outlet and close to the interactive terminal, so that the user can take the printed photo paper version of the target photo.
  • this embodiment does not limit it.
  • the interaction terminal may detect in real time whether there is a target object (that is, a user) within the interaction range through a detection processor (eg, an optical sensor such as an infrared sensor).
  • the interaction device may use the infrared sensor to emit infrared light to detect the target object, wherein the interaction range may be the detection range of the infrared sensor.
  • a detection processor of an interaction device may include an acoustic sensor.
  • when the interaction terminal does not detect a target object within the interaction range, it may display activity promotion content.
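The idle/active switching described above reduces to a simple decision driven by the detection result; a minimal sketch, with hypothetical function and screen names (a real terminal would feed this from its infrared sensor readings):

```python
def select_screen(target_detected: bool) -> str:
    """Choose what the terminal displays: the interactive interface when a
    target object is in range, otherwise the activity promotion content."""
    return "interactive_interface" if target_detected else "activity_promotion"
```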
  • FIG. 5 is a schematic diagram of activity promotion content according to at least one embodiment of the present disclosure.
  • the activity promotion content displayed on the interactive terminal can be an activity promotion poster, and the activity promotion poster can include activity promotion information (for example, "Smile to solve a thousand worries, and be with you!") and activity participation methods (for example, "Please move to the best photo area, adjust your posture, and keep smiling").
  • Event promotion posters can also include patterns such as cartoon images of service agencies. However, this embodiment does not limit it.
  • the promotional event content may include a promotional event video.
  • the event promotion video can be played in a loop.
  • the event promotion content may include event promotion posters and event promotion videos.
  • the interactive terminal attracts users to participate in the interactive activity by displaying activity promotion content.
  • Fig. 6 is a schematic diagram of an interactive interface when a user has not yet participated in an interactive activity according to at least one embodiment of the present disclosure.
  • the interactive interface may display an interactive start control, such as a "start experience" button.
  • the interactive interface can also display welcome words for the interactive activity (for example, "Welcome to experience the 'Interactive Game Screen'") and the theme name of the interactive activity (for example, "Smart Bank Experience Tour"), etc.
  • This embodiment does not limit it.
  • in response to the user's operation on the interaction activation control (for example, a single-click or double-click operation on the control on the interaction interface), the interaction interface of the interaction terminal may pop up a privacy agreement floating window, in which a consent control may be displayed.
  • an image capture frame may be displayed on the interactive interface.
  • the image collection is performed after the user's authorization is obtained, which fully respects the user's privacy.
  • the interactive interface may display an image collection frame, and collect the user's image to be identified through the camera.
  • Fig. 7 is a schematic diagram of an image acquisition frame displayed on an interactive interface according to at least one embodiment of the present disclosure.
  • the image capture frame displayed on the interactive interface may include: a first image capture frame A1 and a second image capture frame A2.
  • the second image capture frame A2 is located within the first image capture frame A1.
  • the first image capture frame A1 may be located in a middle area of the interactive interface, and the first image capture frame A1 may be a rectangle.
  • the second image acquisition frame A2 may be located in the middle area of the first image acquisition frame A1, and the second image acquisition frame A2 is, for example, oval.
  • the first image capture frame A1 may indicate the image capture area of the user's upper body, and the user may move the position of the upper body image into the first image capture frame A1.
  • the second image capture frame A2 may indicate the image capture area of the user's head, and the user may move the position of the user's head image into the second image capture frame A2.
  • the first image collection frame A1 and the second image collection frame A2 may limit the range of the user's image to be collected, so as to increase the success probability of subsequent identification.
  • text prompt information for image acquisition may also be displayed on the interactive interface.
  • the text prompt information may be: "Please adjust your posture to enter the effective range of face recognition".
  • only the first image acquisition frame may be displayed on the interactive interface. However, this embodiment does not limit it.
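Because the second image capture frame A2 is described as oval, checking whether the user's head has moved into it amounts to a standard ellipse containment test; a sketch with illustrative coordinates (none of these numbers come from the disclosure):

```python
def head_in_frame(head_center, ellipse_center, semi_axes):
    """Return True if the detected head center lies inside the elliptical
    second image capture frame (standard ellipse containment test)."""
    (x, y), (cx, cy), (a, b) = head_center, ellipse_center, semi_axes
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0
```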
  • the interactive terminal may collect the image to be recognized of the user through a camera.
  • the interactive terminal can transmit the collected image to be identified to the management server; the management server can process the image to be identified to obtain the target image and send the target image to the data server, so that the data server can perform identification based on the target image.
  • the processing of the image to be recognized by the management server may include: cropping the image to be recognized according to the first image capture frame, cutting off the image content outside the first image capture frame, and compressing the cropped image to obtain the target image.
  • the size of the target image may be less than or equal to 1 megabyte (1 MB).
  • alternatively, the interactive terminal can process the collected image to be recognized (for example, by cropping and compressing it) to obtain the target image, and transmit the target image directly to the data server, so that the data server can perform identification based on the target image.
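The crop-then-compress step can be sketched in pure Python on a row-major pixel grid; this is only the control flow, since a production system would more likely crop and re-encode a JPEG (for example with an imaging library) until it falls under the 1 MB budget:

```python
def crop_to_frame(pixels, frame):
    """Crop a row-major pixel grid to the first image capture frame.
    frame = (left, top, right, bottom), with right/bottom exclusive."""
    left, top, right, bottom = frame
    return [row[left:right] for row in pixels[top:bottom]]

def shrink_to_budget(pixels, max_pixels):
    """Halve the image in each dimension until it fits the size budget,
    standing in for the iterative quality-reduction loop a real server
    might run to keep the target image under 1 MB."""
    while len(pixels) * len(pixels[0]) > max_pixels:
        pixels = [row[::2] for row in pixels[::2]]
    return pixels
```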
  • the recognition progress and prompt information may be displayed on the interactive interface of the interactive terminal.
  • Fig. 8 is a schematic interface diagram of the identity recognition process of at least one embodiment of the present disclosure. As shown in FIG. 8 , the interactive interface displays a prompt message (for example, "Face recognition is in progress"), and a progress bar of the face recognition, for example, the progress of the face recognition is represented by a percentage. However, this embodiment does not limit it.
  • the data server stores the user's registration information at the service institution (for example, a bank), including basic information such as the user's name, age, gender, occupation, and picture, as well as the user's business information at the service institution, for example, the user's label at the service institution, the user's wealth and health value at the service institution, and ranking information for the wealth and health value.
  • this embodiment does not limit it.
  • the data server can match the target image according to the database of registered users to obtain the recognition result.
  • the data server can obtain an identification result including the user's identity information, and transmit the identification result to the management server.
  • the identification result may include: the user's basic information (such as name, age, gender, and occupation), the user's label at the service institution, the user's wealth and health value at the service institution, and ranking information for the wealth and health value.
  • This embodiment does not limit the facial recognition algorithm adopted by the data server.
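Since the disclosure leaves the recognition algorithm open, one common approach is to compare a face embedding of the target image against stored embeddings of registered users and treat scores below a threshold as "not registered". A minimal cosine-similarity sketch; the embeddings, threshold, and user IDs are all hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_user(query, registered, threshold=0.8):
    """Return the ID of the best-matching registered user, or None when the
    target object is not found in the registered-user database."""
    best_id, best_score = None, threshold
    for user_id, embedding in registered.items():
        score = cosine_similarity(query, embedding)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id
```

Returning `None` corresponds to the "non-registered user" recognition result described below.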
  • the management server may store and organize the identification result, and transmit the identity information to the interactive terminal.
  • identity information may include: the user's gender, age, labels, and wealth and health ranking information. However, this embodiment does not limit it.
  • when the data server does not identify a user matching the target image in the database of registered users, it may return a recognition result that does not include identity information to the management server; that is, the recognition result shows that the target object in the target image is not registered with the service institution.
  • when the management server receives a recognition result returned by the data server indicating that the target object is a non-registered user, it can transmit the recognition result to the interactive terminal, and the interactive terminal can display registration prompt information on the interactive interface.
  • the interactive interface may display text such as "Registered users are welcome to obtain a professional wealth and health diagnosis".
  • in response to the user's operation on the interactive interface, the user may be further guided to register, so as to become a registered user and then participate in the interactive activity.
  • this embodiment does not limit it.
  • the interactive terminal can perform expression recognition and evaluation based on the image to be recognized, and obtain the expression information of the target object.
  • the expression information may include: expression type and expression scoring result.
  • a smiley expression and a corresponding smile value can be identified. This embodiment does not limit the facial expression recognition algorithm adopted by the interactive terminal.
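The expression recognition algorithm is likewise left open; as a toy illustration, a smile value could be derived from simple face geometry. The sketch below maps a mouth-to-face width ratio to a 0-100 score; the 0.3-0.6 calibration range is an arbitrary assumption, not from the disclosure:

```python
def smile_value(mouth_width: float, face_width: float) -> int:
    """Map the mouth-to-face width ratio to a 0-100 smile score; a wider
    mouth relative to the face is read as a broader smile."""
    ratio = mouth_width / face_width
    score = (ratio - 0.3) / 0.3 * 100  # ratios 0.3..0.6 map to 0..100
    return max(0, min(100, round(score)))
```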
  • the interaction terminal may display the real-time image, identity information, and expression information of the target object on the interaction interface.
  • Fig. 9 is a schematic diagram of displaying real-time images, identity information and expression information of an interactive interface according to at least one embodiment of the present disclosure.
  • the interactive interface may have a first display area 101 and a third display area 103 .
  • the third display area 103 may be located at one side (eg, left side) of the first display area 101 .
  • a second display area 102 is suspended in the first display area 101 , and the second display area 102 is located in the lower half of the first display area 101 .
  • the first display area 101 can display the real-time image of the user
  • the second display area 102 can display the user's identity information and expression information
  • the third display area 103 can display the participation situation of the interactive activity (for example, including the number of participants, the male-to-female ratio among the participants, and the age distribution).
  • a camera control 201 and an exit control 202 may also be displayed on the interactive interface.
  • the camera control 201 and the exit control 202 can be floatingly displayed in the first display area 101 and located on opposite sides of the second display area 102 .
  • the camera control 201 is located on the left side of the second display area 102
  • the exit control 202 is located on the right side of the second display area 102 .
  • this embodiment does not limit it.
  • the second display area 102 can display the user's age (for example, 27), gender (for example, male), label (for example, "winner in life"), wealth and health value ranking information (for example, "your wealth and health value beats 80% of the people in our country"), and smile value (for example, 88).
  • the user's label and wealth and health value ranking information are arranged along a first direction (for example, a vertical direction); gender and age are also arranged along the first direction; and the smile value is arranged, along a second direction (for example, a horizontal direction), between the label and the gender, wherein the first direction is perpendicular to the second direction.
  • this embodiment does not limit the arrangement position of the display information in the second display area 102 .
  • displaying the ranking information can attract users to forward the result of this interaction, thereby attracting more users to participate, and can stimulate users' interest in improving their ranking, thereby encouraging users to optimize their asset allocation. In this example, displaying the smile value adds an element of fun.
  • the real-time image of the first display area 101 is obtained by the interactive terminal using a camera in real time.
  • the interactive terminal may update the smile value displayed on the second display area 102 according to the real-time image obtained in real time.
  • the interactive terminal can dynamically change the smile value of the second display area 102 by detecting and recognizing the user's expression in real time, so as to guide the user to smile, adjust the user's mood, and increase the fun of the interaction.
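When the displayed smile value is updated from every camera frame, raw per-frame scores can jitter; one plausible refinement (not stated in the disclosure) is to blend each new measurement into the displayed value with an exponential moving average:

```python
def smooth_smile(displayed: float, measured: float, alpha: float = 0.3) -> float:
    """Blend the newly measured smile value into the displayed one so the
    on-screen number changes gradually rather than flickering."""
    return displayed + alpha * (measured - displayed)
```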
  • a smile value leaderboard can be displayed in the third display area 103, showing the avatars and smile values of the top N users (for example, N is 3) ranked by smile value.
  • this embodiment does not limit it.
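The top-N leaderboard can be derived with a simple sort over the recorded smile values; a sketch with hypothetical user data:

```python
def smile_leaderboard(smiles, n=3):
    """Return the top-n (user, smile value) pairs, highest smile value
    first, as shown in the third display area's leaderboard."""
    return sorted(smiles.items(), key=lambda kv: kv[1], reverse=True)[:n]
```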
  • in response to the user's operation on the camera control 201 (for example, a single-click or double-click operation), the interactive terminal can collect a real-time image of the user and combine the real-time image and interaction information to generate a commemorative photo.
  • the interaction information may include: user identity information and expression information.
  • the interactive terminal combines the collected real-time image and interaction information to generate a souvenir photo and transmits it to the management server, and the management server processes the souvenir photo to obtain the target photo.
  • the target photo may include: real-time images and interaction information of the user.
  • the interaction information may include the user's identity information, expression information, and business promotion information.
  • FIG. 10 is a schematic diagram of a target photo according to at least one embodiment of the present disclosure.
  • the target photo may include: a picture display area 105 and an information display area 104 .
  • the picture display area 105 has the user's real-time image, identity information and expression information
  • the information display area 104 has business promotion information.
  • the business promotion information may include: bank promotional speeches, promotional products, QR codes of wealth management managers, names of wealth management managers, avatars of bank intelligent assistants, and the like.
  • this embodiment does not limit it.
  • the management server after processing the target photo, sends the target photo to the mobile server.
  • after the mobile server receives and saves the target photo, it generates download information and returns it to the management server.
  • the management server sends the download information to the interactive terminal.
  • the interactive terminal can display the download information on the interactive interface.
  • Fig. 11 is a schematic diagram of displaying download information in at least one embodiment of the present disclosure.
  • the interactive interface includes a preview display area and a download information display area. A souvenir photo is displayed in the preview display area.
  • the download information display area displays download information.
  • the download information is displayed in the form of a QR code, and a text prompt is displayed near the download information (for example, "Please use WeChat/mobile banking to scan the QR code to download and save the picture").
  • Printing prompt information is also displayed on the interactive interface (for example, "Please move to the service area and use the photo printer to print photos").
  • the user can scan the QR code displayed on the interactive interface through the bank client in the mobile terminal (eg, mobile phone) to open the webpage link, obtain the target photo, and then can choose to download or print the target photo.
  • when the user selects the download command, the target photo can be saved in the photo album of the user's mobile terminal; when the user selects the print command, the mobile server can transmit the print command and photo information to the management server, the management server can transmit the print command and photo information to the printing device, and the printing device executes the print command to produce a photo-paper version of the target photo.
  • the user can go to the printing device placed in the bank service area to take the target photo of the photo paper version.
  • the download information provided by the mobile server may be valid only once to protect user privacy.
  • the QR code displayed on the interactive interface is valid for a single scan. After the user scans the QR code displayed on the interactive terminal once through the mobile terminal, the mobile server can put the QR code displayed on the interactive terminal into an invalid state.
  • in order to protect user privacy, after the user scans the QR code through the mobile terminal, the mobile server can send a verification code to the mobile phone number bound to the user, and the user needs to enter the verification code on the mobile terminal to download the target photo.
  • this embodiment does not limit it.
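The single-use download links described above can be modelled as opaque tokens that are invalidated on first redemption; a minimal sketch (the token format and in-memory store are hypothetical, and a real mobile server would persist this state):

```python
import secrets

class DownloadTokens:
    """One-time download information: each token maps to a photo and is
    invalidated the first time it is redeemed."""
    def __init__(self):
        self._photos = {}

    def issue(self, photo_id: str) -> str:
        """Create a fresh token, e.g. the value encoded in the QR code."""
        token = secrets.token_urlsafe(16)
        self._photos[token] = photo_id
        return token

    def redeem(self, token: str):
        # pop() removes the entry, making the token single-use
        return self._photos.pop(token, None)
```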
  • in response to the user's operation on the exit control 202 (for example, a single-click or double-click operation), the interaction process can be exited.
  • the interaction terminal may exit the interaction process if no user operation is detected within a set period of time.
  • this embodiment does not limit it.
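The no-operation timeout can be implemented with a simple idle timer; the 30-second default and the injected clock function are illustrative choices (the clock is injected so the timer can be tested without waiting):

```python
import time

class IdleTimer:
    """Exit the interaction flow when no user operation arrives within the
    set period."""
    def __init__(self, timeout: float = 30.0, now=time.monotonic):
        self.timeout, self.now = timeout, now
        self.last = now()

    def touch(self):
        """Record a user operation, resetting the idle countdown."""
        self.last = self.now()

    def expired(self) -> bool:
        return self.now() - self.last >= self.timeout
```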
  • the combination of object recognition and experience games can bring users a relaxed and interesting experience, and can improve the publicity effect of the service organization while improving the user experience. For example, by displaying the wealth and health value rankings on the interactive interface, users can be attracted to retweet, thereby attracting more participants, and can stimulate users' interest in improving the rankings, so as to guide users to optimize asset allocation. By displaying the dynamic change of the smile value, it can guide the user to smile, adjust the user's mood, and increase the fun.
  • the interaction system can realize simultaneous interaction with multiple people.
  • when the interaction terminal detects at least one target object within the interaction range, it displays the interaction interface.
  • at least two image acquisition frames may be displayed on the interactive interface, and the images to be identified of at least two target objects may be collected.
  • the interactive terminal may sequentially perform image acquisition and identification of two target objects.
  • the interactive interface can display a newly added recognition object control and an interaction control; in response to the target object's operation on the newly added recognition object control, the interactive interface can display the image acquisition frame again and acquire the image information of the second target object.
  • the interactive interface can display a real-time image of the target object.
  • the interactive terminal can simultaneously collect images and identify two target objects.
  • the interactive interface can display two image acquisition frames at the same time, and simultaneously complete image acquisition and identification for two target objects.
  • this embodiment does not limit it.
  • the interactive terminal may display the real-time images, identity information, and expression information of at least two target objects on the interactive interface.
  • Fig. 12 is a schematic diagram of a multi-person interaction interface according to at least one embodiment of the present disclosure.
  • the interactive interface has a group photo area 107 , a comprehensive information display area 106 and a smile progress bar 108 .
  • the group photo area 107 displays the real-time images, identity information and expression information of the two target objects.
  • the identity information of each target object may include: gender, age and label
  • the expression information may include: smile value.
  • the comprehensive information display area 106 may include: the number of participants in the interactive activity (for example, 83719), the ratio of male to female among the number of participants, the age distribution of the number of participants, and the ranking list of smile values.
  • the smile value leaderboard shows the top 3 user avatars and their corresponding smile values.
  • the comprehensive information display area 106 is located on one side (for example, left side) of the group photo area 107 .
  • the smile progress bar 108 can be floatingly displayed in the group photo area 107 and located on a side away from the comprehensive information display area 106 .
  • the smile progress bar 108 may indicate the smile value of a single target object, or may indicate the total smile value of multiple target objects. However, this embodiment does not limit it.
  • an automatic photo-taking may be triggered.
  • the interactive interface of the interactive terminal may enter the photographing state as shown in FIG. 13 .
  • the interactive interface prompts to prepare to take a photo, and displays a group photo of two target objects, as well as their respective identity information (for example, gender, age and label) and expression information (for example, smile value) of the two target objects.
  • a cartoon image of the service institution may be displayed on the interaction interface to enhance the fun of the interaction.
  • the smile progress bar may indicate the total smile value of multiple target objects, and when the total smile value reaches a threshold, an automatic photo taking is triggered.
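The threshold trigger for the automatic group photo reduces to a comparison on the summed smile values; the threshold of 160 below is purely illustrative:

```python
def should_auto_capture(smile_values, threshold=160):
    """Trigger the automatic group photo when the total smile value of all
    target objects reaches the threshold."""
    return sum(smile_values) >= threshold
```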
  • the identity information and expression information of the target object may be displayed in a hover near the image of the target object to indicate which target object the identity information and expression information belong to.
  • the gender, age, and smile values of the target object may be displayed floating above the head of the target object, and the label may be displayed floating below the head of the target object.
  • this embodiment does not limit it.
  • the cartoon image of the service organization can be displayed in Fig. 12 and Fig. 13 to increase the fun of the interaction.
  • the cartoon image of the service organization may have various postures, for example, postures for taking pictures, postures for reading books, postures for diving, and so on. This embodiment does not limit it.
  • in FIG. 9 to FIG. 13, in order to protect the user's privacy, the user's image is shielded, and only an oval frame is used to roughly indicate the position of the user's head.
  • the cartoon image of the service institution is covered, and only the gray rectangular frame is used to roughly indicate the position of the cartoon image.
  • information such as a two-dimensional code is blocked by a rectangular frame with white dots on a black background.
  • the management server can collect the user's experience behavior information (for example, the number of experiences, experience duration, etc.) from the interactive terminal, and perform statistical analysis on the experience behavior information to obtain the popularity of the interactive activity, which is provided to the management personnel of the service institution as a reference for decision-making.
  • this embodiment does not limit it.
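The statistical analysis on experience behavior might aggregate session records along the lines of the following sketch; the record fields (`user`, `duration`) and the chosen summary statistics are hypothetical:

```python
from collections import Counter

def activity_stats(sessions):
    """Summarize experience behavior: distinct participants, total session
    count, and average experience duration in seconds."""
    participants = Counter(s["user"] for s in sessions)
    total = sum(s["duration"] for s in sessions)
    return {
        "participants": len(participants),
        "sessions": len(sessions),
        "avg_duration": total / len(sessions),
    }
```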
  • FIG. 14 is a schematic diagram of an interactive system according to at least one embodiment of the present disclosure.
  • the interaction system of this embodiment may include: an interaction terminal 41 , a data server 42 and a mobile server 43 .
  • the interactive terminal 41 can communicate with the data server 42 and the mobile server 43 in a wired or wireless manner.
  • the interactive terminal 41 is configured to display an interactive interface when a target object is detected within the interactive range, display an image acquisition frame on the interactive interface in response to the operation of the target object on the interactive interface, and collect an image to be recognized of the target object.
  • the interactive terminal 41 is also configured to process the collected image to be recognized to obtain a target image, and send the target image to the data server 42 .
  • the data server 42 is configured to recognize the target image, obtain a recognition result, and return the recognition result to the interactive terminal 41 .
  • the interaction terminal 41 is configured to display the real-time image and identity information of the target object on the interaction interface when the recognition result includes identity information.
  • the interaction terminal 41 is further configured to generate a photo of the target by combining the real-time image of the target object and the interaction information, and send the photo of the target to the mobile server 43 .
  • the mobile server 43 is configured to store the target photo, and transmit the download information of the target photo to the interactive terminal 41 .
  • the interactive terminal 41 directly communicates with the data server 42 and the mobile server 43 .
  • the interactive terminal 41 may include: a detection processor, a display, an input processor, a camera, and an information processor.
  • the detection processor is configured to detect whether a target object is present within the interaction range.
  • the display is configured to display an interaction interface when the detection processor detects a target object in the interaction range.
  • the input processor is configured to input the operation of the target object on the interactive interface.
  • the display is further configured to display an image acquisition frame on the interactive interface in response to the operation of the target object on the interactive interface.
  • the camera is configured to capture images of the target object to be recognized.
  • the information processor is configured to process the collected image to be recognized, obtain the target image, transmit the target image to the data server, and receive the recognition result returned by the data server.
  • the information processor is also configured to acquire expression information according to the collected images.
  • At least one embodiment of the present disclosure further provides an electronic device, including: a memory and a processor.
  • the memory is adapted to store a computer program which, when executed by the processor, implements the steps of the interaction method described above.
  • FIG. 15 is a schematic diagram of an electronic device according to at least one embodiment of the present disclosure.
  • the electronic device of this embodiment includes: a processor 501 and a memory 502 .
  • the processor 501 and the memory 502 can be connected through a bus.
  • the memory 502 is suitable for storing a computer program, and when the computer program is executed by the processor 501, the steps of the interaction method provided by the above-mentioned embodiments are implemented.
  • the processor 501 may include a processing device such as an MCU or an FPGA.
  • the memory 502 may store software programs and modules of application software, such as program instructions or modules corresponding to the interaction method in this embodiment.
  • the processor 501 executes various functional applications and data processing by running software programs and modules stored in the memory 502 , such as implementing the interaction method provided in this embodiment.
  • the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 502 may include a memory that is remotely located relative to the processor 501, and these remote memories may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • At least one embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing a computer program, and when the computer program is executed, the steps of the above interaction method are implemented.
  • the functional modules or units in the system, and the device can be implemented as software, firmware, hardware, and an appropriate combination thereof.
  • the division between functional modules or units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be composed of several physical components. Components cooperate to execute.
  • Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
  • Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An interaction method, including: displaying an interactive interface when a target object is detected within an interaction range; in response to an operation by the target object on the interactive interface, displaying an image capture frame on the interactive interface and capturing an image of the target object to be recognized; acquiring identity information of the target object based on the captured image to be recognized; and displaying a real-time image and the identity information of the target object on the interactive interface.

Description

Interaction Method, Interaction System and Electronic Device
Technical Field
The present disclosure relates to, but is not limited to, the field of human-computer interaction, and in particular to an interaction method, an interaction system, and an electronic device.
Background
With the continuous development of technology, computers are being applied in ever more fields. How to use human-computer interaction to improve the user experience in different scenarios is a key concern.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
Embodiments of the present disclosure provide an interaction method, an interaction system, and an electronic device.
In one aspect, an embodiment of the present disclosure provides an interaction method, including: displaying an interactive interface when a target object is detected within an interaction range; in response to an operation by the target object on the interactive interface, displaying an image capture frame on the interactive interface and capturing an image of the target object to be recognized; acquiring identity information of the target object based on the captured image to be recognized; and displaying a real-time image and the identity information of the target object on the interactive interface.
In some exemplary implementations, the identity information includes at least: information of the target object at a service institution.
In some exemplary implementations, the service institution is a financial service institution, and the information of the target object at the service institution includes at least one of: a wealth checkup result of the target object at the financial service institution, or a tag of the target object at the financial service institution.
In some exemplary implementations, the interaction method further includes: acquiring expression information of the target object based on the captured image to be recognized, and displaying the expression information of the target object on the interactive interface; where the expression information includes an expression type or an expression scoring result.
In some exemplary implementations, the interaction method further includes: updating the expression information displayed on the interactive interface according to the real-time image of the target object.
In some exemplary implementations, the interaction method further includes: when the expression information of the target object satisfies a set condition, combining the real-time image of the target object with interaction information to generate a target photo suitable for the target object to save or print; the interaction information includes at least the identity information or the expression information of the target object.
In some exemplary implementations, the interaction method further includes: displaying a photographing control on the interactive interface; and in response to an operation by the target object on the photographing control, combining the real-time image of the target object with interaction information to generate a target photo suitable for the target object to save or print; the interaction information includes at least the identity information or the expression information of the target object.
In some exemplary implementations, the interaction information further includes: business promotion information of the service institution.
In some exemplary implementations, the interaction method further includes: after generating the target photo, displaying download information of the target photo on the interactive interface.
In some exemplary implementations, the interaction method further includes: displaying activity promotion content when no target object is detected within the interaction range.
In some exemplary implementations, the image capture frame includes a first image capture frame and a second image capture frame, and the second image capture frame is located within the first image capture frame.
In some exemplary implementations, acquiring the identity information of the target object based on the captured image to be recognized includes: cropping and compressing the captured image to be recognized to obtain a target image; and recognizing the target image through a data server to acquire the identity information.
In another aspect, an embodiment of the present disclosure provides an interaction system, including an interactive terminal. The interactive terminal is configured to display an interactive interface when a target object is detected within an interaction range; in response to an operation by the target object on the interactive interface, display an image capture frame on the interactive interface and capture an image of the target object to be recognized; acquire identity information of the target object based on the captured image to be recognized; and display a real-time image and the identity information of the target object on the interactive interface.
In some exemplary implementations, the interaction system further includes a management server and a data server. The management server is configured to process the captured image to be recognized to obtain a target image, and send the target image to the data server. The data server is configured to recognize the target image to obtain a recognition result, and return the recognition result to the management server. The management server is further configured to return the identity information to the interactive terminal when the recognition result includes identity information.
In some exemplary implementations, the interaction system further includes a mobile server. The management server is further configured to combine the real-time image of the target object with interaction information to generate a target photo, and send the target photo to the mobile server. The mobile server is configured to store the target photo and send download information of the target photo to the interactive terminal through the management server. The interactive terminal is further configured to display the download information of the target photo on the interactive interface.
In some exemplary implementations, the interactive terminal includes: a detection processor, a display, an input processor, a camera, and an information processor. The detection processor is configured to detect whether a target object is present within the interaction range. The display is configured to display the interactive interface when the detection processor detects a target object within the interaction range. The input processor is configured to detect an operation by the target object on the interactive interface. The display is further configured to display an image capture frame on the interactive interface in response to an operation by the target object on the interactive interface. The camera is configured to capture an image of the target object to be recognized. The information processor is configured to transmit the captured image to be recognized to a management server and receive the identity information returned by the management server. The display is further configured to display the real-time image and the identity information of the target object on the interactive interface.
In some exemplary implementations, the management server includes: an image receiver, an image processor, an image sender, and a customer information processor. The image receiver is configured to receive the image to be recognized from the interactive terminal. The image processor is configured to process the image to be recognized to obtain a target image. The image sender is configured to send the target image to the data server. The customer information processor is configured to receive the recognition result returned by the data server, obtain the identity information from the recognition result, and send the identity information to the interactive terminal.
In another aspect, an embodiment of the present disclosure further provides an electronic device, including a memory and a processor. The memory is adapted to store a computer program which, when executed by the processor, implements the steps of the interaction method described above.
In another aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing a computer program which, when executed, implements the steps of the interaction method described above.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
The drawings are provided for a further understanding of the technical solutions of the present disclosure and form a part of the specification. Together with the embodiments of the present disclosure, they serve to explain the technical solutions of the present disclosure and do not limit them. The shapes and sizes of one or more components in the drawings do not reflect true scale and are intended only to illustrate the present disclosure.
FIG. 1 is a flowchart of an interaction method according to at least one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an interaction system according to at least one embodiment of the present disclosure;
FIG. 3 is an example diagram of an interaction system according to at least one embodiment of the present disclosure;
FIG. 4 is an example diagram of an interaction system according to at least one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of activity promotion content according to at least one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an interactive interface before a user has joined the interactive activity according to at least one embodiment of the present disclosure;
FIG. 7 is a schematic diagram of image capture frames displayed on an interactive interface according to at least one embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an interface during identity recognition according to at least one embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the display of a real-time image, identity information, and expression information on an interactive interface according to at least one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a target photo according to at least one embodiment of the present disclosure;
FIG. 11 is a schematic diagram of download information of a target photo according to at least one embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a multi-person interaction interface according to at least one embodiment of the present disclosure;
FIG. 13 is a schematic diagram of photographing during multi-person interaction according to at least one embodiment of the present disclosure;
FIG. 14 is a schematic diagram of another interaction system according to at least one embodiment of the present disclosure;
FIG. 15 is a schematic diagram of an electronic device according to at least one embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in detail below with reference to the drawings. The implementations may be carried out in many different forms. Those of ordinary skill in the art will readily appreciate that the manners and contents may be changed into one or more other forms without departing from the spirit and scope of the present disclosure. Therefore, the present disclosure should not be construed as being limited to the content described in the following implementations. Where no conflict arises, the embodiments of the present disclosure and the features therein may be combined with one another arbitrarily.
In the drawings, the size of one or more constituent elements, or the thickness of layers or regions, is sometimes exaggerated for clarity. Therefore, one mode of the present disclosure is not necessarily limited to those dimensions, and the shapes and sizes of components in the drawings do not reflect true scale. Furthermore, the drawings schematically show ideal examples, and one mode of the present disclosure is not limited to the shapes, values, or the like shown in the drawings.
Ordinal numbers such as "first", "second", and "third" in the present disclosure are used to avoid confusion between constituent elements, not to impose any numerical limitation. In the present disclosure, "a plurality of" means two or more.
In the present disclosure, for convenience, terms indicating orientation or positional relationships, such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are used to describe the positional relationships of constituent elements with reference to the drawings. They are used only for ease of description and to simplify the specification, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the present disclosure. The positional relationships of constituent elements change as appropriate according to the direction in which each constituent element is described, so the terms are not limited to those used in the specification and may be replaced as appropriate.
In the present disclosure, unless otherwise expressly specified and defined, the terms "mount", "connect", and "couple" are to be understood broadly. For example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediary, or an internal communication between two elements. Those of ordinary skill in the art can understand the meanings of these terms in the present disclosure according to the context.
FIG. 1 is a flowchart of an interaction method according to at least one embodiment of the present disclosure. As shown in FIG. 1, the interaction method provided by at least one embodiment of the present disclosure includes the following steps:
Step S1: displaying an interactive interface when a target object is detected within an interaction range;
Step S2: in response to an operation by the target object on the interactive interface, displaying an image capture frame on the interactive interface and capturing an image of the target object to be recognized;
Step S3: acquiring identity information of the target object based on the captured image to be recognized;
Step S4: displaying a real-time image and the identity information of the target object on the interactive interface.
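The four steps above can be sketched as a minimal control loop. All names below (`detect_target`, `recognize_identity`, `interaction_flow`) are hypothetical illustrations rather than identifiers from the disclosure, and the recognition step is a stub standing in for the data-server lookup described later.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Identity:
    age: int
    gender: str
    tag: str

def detect_target(sensor_reading: float, threshold: float = 0.5) -> bool:
    # Step S1 trigger: a proximity reading above the threshold counts as
    # "target object detected within the interaction range".
    return sensor_reading > threshold

def recognize_identity(image: bytes) -> Optional[Identity]:
    # Step S3 stub: a real system would query the data server here.
    return Identity(age=27, gender="male", tag="demo") if image else None

def interaction_flow(sensor_reading: float, captured_image: bytes) -> str:
    if not detect_target(sensor_reading):
        return "show promotion content"   # no target: keep playing promos
    # Step S2: interface shown, capture frame displayed, image captured.
    identity = recognize_identity(captured_image)  # Step S3
    if identity is None:
        return "prompt registration"
    # Step S4: show the real-time image together with the identity info.
    return f"display identity: {identity.tag}"
```

The "no target" and "unregistered" branches correspond to the promotion-content and registration-prompt behaviors described in the exemplary implementations below.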
The interaction approach provided by this embodiment combines object recognition with an experiential game, thereby making human-computer interaction more engaging.
In some exemplary implementations, the interaction method of this embodiment may be performed by a terminal device with a display function. For example, the terminal device may be a welcome device of a financial service institution (for example, a bank), such as a welcome robot or a welcome screen. Compared with the traditional manual greeting at bank branches, the interaction method of this embodiment combines object recognition with an experiential game to give users a relaxed and entertaining experience, which helps attract user attention and thereby improves the promotion and marketing effect of products or services. However, this embodiment is not limited thereto. For example, the interaction method of this embodiment may be applied to welcome devices or business promotion devices of other service institutions (for example, insurance service institutions).
In some exemplary implementations, the target object may be a person.
In some exemplary implementations, the identity information may include at least: information of the target object at a service institution. In some examples, the identity information may include: basic information of the target object, a tag of the target object at the service institution, and ranking information of the target object at the service institution. The basic information of the target object may include the target object's age, gender, and so on. However, this embodiment is not limited thereto.
In some exemplary implementations, the service institution may be a financial service institution (for example, a bank). The information of the target object at the service institution may include at least one of: a wealth checkup result of the target object at the financial service institution, or a tag of the target object at the financial service institution. For example, the wealth checkup result may include at least one of: the target object's wealth health value (or wealth health degree) at the financial service institution, or ranking information of the wealth health value (or wealth health degree). In some examples, the identity information of the target object may include the target object's age, gender, tag at the bank of registration, and wealth health value ranking information at that bank. In this example, displaying the target object's wealth health value ranking information on the interactive interface can guide the target object toward optimizing asset allocation, which benefits the business and product promotion of the financial service institution. However, this embodiment is not limited thereto. For example, the service institution may be an insurance service institution, in which case the information of the target object at the service institution may include ranking information of the target object's insurance health degree at the insurance service institution. In addition, the wealth checkup result of this embodiment may be obtained through wealth diagnosis, asset analysis, or other means; this embodiment does not limit the analysis method by which the wealth checkup result is obtained.
In some exemplary implementations, the interaction method of this embodiment further includes: acquiring expression information of the target object based on the captured image to be recognized, and displaying the expression information of the target object on the interactive interface. The expression information may include an expression type (for example, a smile) or an expression scoring result (for example, a smile value). In this example, the interactive interface may simultaneously display the target object's real-time image, identity information, and expression information. Displaying the expression information on the interactive interface helps guide the target object to adjust their expression (for example, to smile), thereby improving the target object's mood, adding fun to the interaction, and improving the user experience.
In some exemplary implementations, the interaction method of this embodiment further includes: updating the expression information displayed on the interactive interface according to the real-time image of the target object. In this example, the expression information may change dynamically as the target object's real-time image changes; that is, the expression information may vary dynamically in real time. Displaying dynamically changing expression information on the interactive interface helps guide the target object's expression in real time, adding fun to the interaction and improving the user experience.
In some exemplary implementations, the interaction method of this embodiment may further include: when the expression information of the target object satisfies a set condition, combining the real-time image of the target object with interaction information to generate a target photo suitable for the target object to save or print. The interaction information includes at least the identity information or the expression information of the target object. In some examples, the set condition may be that the target object's expression is a smile and the smile value exceeds a threshold. However, this embodiment is not limited thereto. In this exemplary implementation, providing a commemorative-photo step adds fun to the interaction, and automatically triggering the photo based on the expression information further gives users a relaxed and entertaining experience, improving the user experience.
In some exemplary implementations, the interaction method of this embodiment may further include: displaying a photographing control on the interactive interface, and, in response to an operation by the target object on the photographing control, combining the real-time image of the target object with interaction information to generate a target photo suitable for the target object to save or print. The interaction information includes at least the identity information or the expression information of the target object. In this exemplary implementation, the commemorative-photo step adds fun to the interaction, and triggering it through a photographing control is simple to operate.
In some exemplary implementations, the interaction information may further include business promotion information of the service institution. For example, where the service institution is a financial service institution, the business promotion information may include information on financial products promoted by the institution, contact details of a promotion contact, and so on. However, this embodiment is not limited thereto.
In some exemplary implementations, the interaction method of this embodiment may further include: after the target photo is generated, displaying download information of the target photo on the interactive interface. In some examples, the download information of the target photo may be presented as a QR code. However, this embodiment is not limited thereto.
In some exemplary implementations, the interaction method of this embodiment may further include: displaying activity promotion content when no target object is detected within the interaction range, where the activity promotion content may include at least one of: an activity promotion poster or an activity promotion video. However, this embodiment is not limited thereto.
In some exemplary implementations, the image capture frame displayed in step S2 may include a first image capture frame and a second image capture frame, with the second image capture frame located within the first. For example, the first image capture frame may be configured to indicate where the user's upper body should be, and the second image capture frame may be configured to indicate where the user's head should be. However, this embodiment is not limited thereto.
In some exemplary implementations, acquiring the identity information of the target object based on the captured image to be recognized includes: cropping and compressing the captured image to be recognized to obtain a target image; and recognizing the target image through a data server to acquire the identity information. In this exemplary implementation, the data server may be the data store of the service institution's registered users. Performing identity recognition through the data server makes full use of the data of already-registered users, which simplifies the system design and avoids additional development cost.
FIG. 2 is a schematic diagram of an interaction system according to at least one embodiment of the present disclosure. In some exemplary implementations, as shown in FIG. 2, the interaction system of this embodiment may include: an interactive terminal 31, a management server 32, and a data server 33. The management server 32 may communicate with the interactive terminal 31 and the data server 33 wirelessly or by wire. The interactive terminal 31 is configured to display an interactive interface when a target object is detected within the interaction range and, in response to an operation by the target object on the interactive interface, display an image capture frame on the interactive interface and capture an image of the target object to be recognized. The management server 32 is configured to process the captured image to be recognized to obtain a target image and send the target image to the data server 33. The data server 33 is configured to recognize the target image to obtain a recognition result and return the recognition result to the management server 32. The management server 32 is configured to return the identity information to the interactive terminal 31 when the recognition result includes identity information. The interactive terminal 31 is further configured to display the target object's real-time image and identity information on the interactive interface.
In some exemplary implementations, the interactive terminal 31 may be an electronic device, for example, a welcome robot; the management server 32 and the data server 33 may be servers. For example, the data server 33 may be the service institution's Customer Relationship Management (CRM) server, and the management server 32 may be a management server for multiple interactive terminals 31. However, this embodiment is not limited thereto.
FIG. 3 is another example diagram of an interaction system according to at least one embodiment of the present disclosure. In some exemplary implementations, as shown in FIG. 3, the interaction system of this embodiment may include: an interactive terminal 31, a management server 32, a data server 33, and a mobile server 34. The management server 32 may communicate with the interactive terminal 31, the data server 33, and the mobile server 34 wirelessly or by wire. Building on the interaction system shown in FIG. 2, in this example the management server 32 is further configured to combine the target object's real-time image with the interaction information to generate a target photo and send the target photo to the mobile server 34. The mobile server 34 is configured to store the target photo and send its download information to the interactive terminal 31 through the management server 32. The interactive terminal 31 is further configured to display the download information of the target photo on the interactive interface. In this example, storing the target photo on the mobile server 34 and providing its download information allows the user to obtain the target photo on a mobile terminal (for example, a mobile phone) through a client (for example, a user application (APP) provided by the service institution) at the download address indicated by the download information. This simplifies the design of the interaction system and reduces development cost.
FIG. 4 is another example diagram of an interaction system according to at least one embodiment of the present disclosure. In some exemplary implementations, as shown in FIG. 4, the interaction system of this embodiment includes: an interactive terminal 31, a management server 32, a data server 33, a mobile server 34, and a printing device 35. The management server 32 may communicate with the interactive terminal 31, the data server 33, the mobile server 34, and the printing device 35 wirelessly or by wire. Building on the interaction system shown in FIG. 3, in some examples, when the user opens the target photo through the client on a mobile terminal (for example, a mobile phone) and chooses to print it, the mobile server 34 transmits the print instruction and photo information to the management server 32, which transmits them to the printing device 35 so that the printing device 35 executes the print instruction according to the photo information. The user can then collect the printed paper copy of the target photo from the printing device 35.
In some exemplary implementations, the interactive terminal 31 and the printing device 35 may be placed in the lobby of the financial service institution, and the management server 32, the data server 33, and the mobile server 34 may be servers deployed at different locations to ensure data security. However, this embodiment is not limited thereto.
In some exemplary implementations, the interactive terminal 31 may include: a display 311, a detection processor 312, a camera 313, an information processor 314, and an input processor 315. The detection processor 312 is configured to detect whether a target object is present within the interaction range. The display 311 is configured to display the interactive interface when the detection processor 312 detects a target object within the interaction range. The input processor 315 is configured to detect an operation by the target object on the interactive interface. The display 311 is further configured to display an image capture frame on the interactive interface in response to an operation by the target object on the interactive interface. The camera 313 is configured to capture an image of the target object to be recognized. The information processor 314 is configured to transmit the captured image to be recognized to the management server 32 and receive the identity information returned by the management server 32. The display 311 is further configured to display the target object's real-time image and identity information on the interactive interface.
In some exemplary implementations, the display 311 is adapted to provide the interactive interface; the detection processor 312 is adapted to detect target objects within the interaction range; the camera 313 is adapted to capture images; the input processor 315 is adapted to detect operations on the interactive terminal; and the information processor 314 is adapted to process and transmit information. The display 311, the detection processor 312, the camera 313, the information processor 314, and the input processor 315 may be connected through a bus.
In some exemplary implementations, the structure of the interactive terminal shown in FIG. 4 does not limit the interactive terminal, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
In some exemplary implementations, the information processor 314 may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a programmable logic device (for example, a Field Programmable Gate Array, FPGA). The information processor 314 runs stored software programs and modules to perform a variety of functional applications and data processing. The information processor 314 may further include communication devices such as communication circuits to enable wireless or wired communication with servers.
In some exemplary implementations, the input processor 315 may be adapted to receive input information. For example, the input processor 315 may include a touch panel (also called a touch screen) and other input devices (for example, a mouse, keyboard, or joystick). The display 311 may be adapted to display information entered by the user or information provided to the user, and may include a display panel such as a liquid crystal display or an organic light-emitting diode display panel. For example, the touch panel may cover the display panel; when the touch panel detects a touch operation on or near it, the operation is transmitted to the information processor 314 to determine the type of touch event, and the information processor 314 then provides a corresponding visual output on the display panel according to the type of touch event. The touch panel and the display panel may serve as two independent components implementing the input and output functions of the interactive terminal, or may be integrated to implement both. However, this embodiment is not limited thereto.
In some exemplary implementations, the detection processor 312 may include an optical sensor, for example, an infrared sensor. The detection processor 312 may detect whether a target object is present within the interaction range by emitting infrared light. However, this embodiment is not limited thereto.
In some exemplary implementations, the camera 313 may directly acquire images of the target object and transmit them to the information processor 314 for processing.
In some exemplary implementations, the management server 32 may include: an image receiver 321, an image processor 322, an image sender 323, and a customer information processor 324. The image receiver 321 is configured to receive the image to be recognized from the interactive terminal 31. The image processor 322 is configured to process the image to be recognized to obtain a target image. The image sender 323 is configured to send the target image to the data server 33. The customer information processor 324 is configured to receive the recognition result returned by the data server 33, obtain the identity information from the recognition result, and send the identity information to the interactive terminal 31. However, this embodiment is not limited thereto.
In some exemplary implementations, the image receiver 321 and the image sender 323 may include communication devices such as communication circuits to enable wireless or wired communication with the interactive terminal and other servers. The image processor 322 may include a processing device such as a microprocessor or a programmable logic device to perform data processing. The customer information processor 324 may include a processing device such as a microprocessor or a programmable logic device, and may further include communication devices such as communication circuits, for data processing and transmission. However, this embodiment is not limited thereto.
The interaction method of this embodiment is illustrated below based on the interaction system shown in FIG. 4, taking as an example an interactive terminal installed at a bank branch for greeting customers. For example, the interactive terminal may be the bank's welcome robot. The management server may be a server that provides a management platform for interactive terminals, enabling the management of multiple interactive terminals. The data server may be the bank's CRM server, configured to store the bank's registered-user information. The mobile server may provide a service platform that interacts with the user's client. The printing device may be installed at the bank branch near the interactive terminal so that users can easily collect the printed paper copy of the target photo. However, this embodiment is not limited thereto.
In some exemplary implementations, the interactive terminal may detect in real time, through the detection processor (for example, an optical sensor such as an infrared sensor), whether a target object (that is, a user) is present within the interaction range. In some examples, the interactive device may detect the target object by emitting infrared light from an infrared sensor, in which case the interaction range may be the detection range of the infrared sensor. However, this embodiment is not limited thereto; for example, the detection processor of the interactive device may include an acoustic sensor.
In some exemplary implementations, when the interactive terminal detects no target object within the interaction range, it may display activity promotion content. FIG. 5 is a schematic diagram of activity promotion content according to at least one embodiment of the present disclosure. As shown in FIG. 5, the activity promotion content displayed by the interactive terminal may be an activity promotion poster, which may include promotional copy (for example, "A smile dispels a thousand worries, ** by your side!") and instructions for joining the activity (for example, "Please move to the best photo area, adjust your posture, and keep smiling"). The poster may also include graphics such as the service institution's cartoon mascot. However, this embodiment is not limited thereto. In some examples, the activity promotion content may include an activity promotion video, which the interactive terminal may play on a loop when no target object is detected within the interaction range; alternatively, the content may include both a poster and a video. In this exemplary implementation, the interactive terminal attracts users to join the interactive activity by displaying the activity promotion content.
In some exemplary implementations, when the interactive terminal detects a target object within the interaction range, the interactive device displays the interactive interface through the display (for example, a display screen). FIG. 6 is a schematic diagram of the interactive interface before the user has joined the interactive activity according to at least one embodiment of the present disclosure. In some examples, as shown in FIG. 6, the interactive interface displays an interaction start control, for example, a "Start experience" button. The interface may also display a welcome message for the interactive activity (for example, "Welcome to the interactive game screen") and the activity's theme name (for example, "Smart banking experience tour"). However, this embodiment is not limited thereto.
In some exemplary implementations, in response to the user's operation on the interaction start control (for example, a single or double click on the control on the interactive interface), the interactive interface pops up a privacy-agreement floating window, which may display a consent control. In response to the user's operation on the consent control (for example, a single or double click), the image capture frame may be displayed on the interactive interface. In this example, because the interaction involves capturing images of the user, capturing images only after user authorization fully respects user privacy.
In some exemplary implementations, after the user authorizes image capture, the interactive interface may display the image capture frames and capture the user's image to be recognized through the camera. FIG. 7 is a schematic diagram of the image capture frames displayed on the interactive interface according to at least one embodiment of the present disclosure. As shown in FIG. 7, the displayed image capture frames may include a first image capture frame A1 and a second image capture frame A2, with A2 located inside A1. In some examples, the first image capture frame A1 may be located in the middle region of the interactive interface and may be rectangular; the second image capture frame A2 may be located in the middle region of A1 and may be, for example, elliptical. The first image capture frame A1 may indicate the image capture region for the user's upper body, and the user may move so that their upper-body image falls within A1. The second image capture frame A2 may indicate the image capture region for the user's head, and the user may move so that their head image falls within A2. In this example, A1 and A2 delimit the captured user-image range, improving the success rate of subsequent identity recognition. In some examples, as shown in FIG. 7, a text prompt for image capture may also be displayed on the interactive interface, for example: "Please adjust your posture to enter the valid range for face recognition". In some examples, only the first image capture frame may be displayed. However, this embodiment is not limited thereto.
In some exemplary implementations, the interactive terminal may capture the user's image to be recognized through the camera and transmit it to the management server. The management server processes the image to obtain a target image and sends the target image to the data server, which performs identity recognition based on it. The management server's processing of the image to be recognized may include: cropping the image along the first image capture frame to remove content outside that frame, then compressing the cropped image to obtain the target image. For example, the target image may be no larger than 1 megabyte (MB). However, this embodiment is not limited thereto. In some examples, the interactive terminal may itself process the captured image (for example, by cropping and compression) to obtain the target image and transmit it directly to the data server for identity recognition.
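The crop-then-compress processing described above can be sketched roughly as follows. This is a hedged illustration: `zlib` stands in for real image compression (in practice JPEG re-encoding), the row-based crop is a simplification of cutting along the first image capture frame, and all names are assumptions rather than identifiers from the disclosure.

```python
import zlib

MAX_BYTES = 1_000_000  # the description caps the target image at about 1 MB

def crop_to_frame(rows, top, bottom):
    """Drop pixel rows outside the first capture frame (rows: list of row lists)."""
    return rows[top:bottom]

def compress_to_target(raw: bytes) -> bytes:
    """Compress the cropped image bytes; fail loudly if still over the cap."""
    out = zlib.compress(raw, level=9)
    if len(out) > MAX_BYTES:
        raise ValueError("still above 1 MB after compression; downscale first")
    return out
```

Cropping before compressing serves two purposes the description mentions: it removes background that is irrelevant to recognition, and it keeps the payload sent to the data server small.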
In some exemplary implementations, while the data server is recognizing the target image, the recognition progress and prompt information may be displayed on the interactive interface of the interactive terminal. FIG. 8 is a schematic diagram of the interface during identity recognition according to at least one embodiment of the present disclosure. As shown in FIG. 8, the interactive interface displays a prompt (for example, "Face recognition in progress") and a face-recognition progress bar, with the progress expressed, for example, as a percentage. However, this embodiment is not limited thereto.
In some exemplary implementations, the data server stores the user's registration information at the service institution (for example, a bank), such as basic information including the user's name, age, gender, occupation, and picture, as well as the user's business information at the service institution (for example, the user's tag, wealth health value, and wealth health value ranking information at the institution). However, this embodiment is not limited thereto.
In some exemplary implementations, the data server may match the target image against its database of registered users to obtain a recognition result. When the data server identifies a registered user matching the target image, it obtains a recognition result that includes the user's identity information and transmits the result to the management server. In some examples, the recognition result may include: the user's basic information (for example, name, age, gender, and occupation), the user's tag at the service institution, the user's wealth health value, and the wealth health value ranking information. This embodiment does not limit the face recognition algorithm used by the data server.
In some exemplary implementations, after the management server receives the recognition result containing identity information from the data server, it may store and organize the result and transmit the identity information to the interactive terminal. For example, the identity information may include the user's gender, age, tag, and wealth health value ranking information. However, this embodiment is not limited thereto.
In some exemplary implementations, when the data server finds no registered user matching the target image, it may return to the management server a recognition result that contains no identity information, that is, a result indicating that the target object in the target image has not yet registered at the service institution. After receiving such a result, the management server may transmit it to the interactive terminal, which may display a registration prompt on the interactive interface. For example, the interface may display: "Welcome! Register as a user to receive a professional wealth health diagnosis". In this example, in response to the user's operation on the interactive interface, the user may be further guided to register and then join the interactive activity as a registered user. However, this embodiment is not limited thereto.
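The disclosure leaves the face recognition algorithm open. As one hedged sketch, matching a target face embedding against the registered-user database could be a thresholded nearest-neighbor search, where failing to clear the threshold corresponds to the "unregistered user" outcome above. Cosine similarity and the threshold value are assumed choices, not part of the disclosure.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_user(target_emb, registered, threshold=0.8):
    """Return the best-matching registered user id, or None if no entry
    clears the similarity threshold (i.e. the visitor is unregistered)."""
    best_id, best_sim = None, threshold
    for uid, emb in registered.items():
        sim = cosine(target_emb, emb)
        if sim > best_sim:
            best_id, best_sim = uid, sim
    return best_id
```

A `None` result would trigger the registration prompt; a matched id would let the management server look up the tag, wealth health value, and ranking information for display.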
In some exemplary implementations, after capturing the image to be recognized, the interactive terminal may perform expression recognition and evaluation on the image to obtain the target object's expression information. For example, the expression information may include an expression type and an expression scoring result; in some examples, a smiling expression and the corresponding smile value may be recognized. This embodiment does not limit the expression recognition algorithm used by the interactive terminal.
In some exemplary implementations, after obtaining the target object's identity information and expression information, the interactive terminal may display the target object's real-time image, identity information, and expression information on the interactive interface. FIG. 9 is a schematic diagram of the display of the real-time image, identity information, and expression information on the interactive interface according to at least one embodiment of the present disclosure. In some examples, as shown in FIG. 9, the interactive interface may have a first display region 101 and a third display region 103, with the third display region 103 on one side (for example, the left side) of the first display region 101. A second display region 102 floats within the first display region 101, in its lower half. The first display region 101 may display the user's real-time image; the second display region 102 may display the user's identity information and expression information; and the third display region 103 may display participation statistics for the interactive activity (for example, the number of participants, their gender ratio, and their age distribution). A photographing control 201 and an exit control 202 may also be displayed on the interactive interface, floating in the first display region 101 on opposite sides of the second display region 102: for example, the photographing control 201 to the left of the second display region 102 and the exit control 202 to its right. However, this embodiment is not limited thereto.
In some exemplary implementations, as shown in FIG. 9, the second display region 102 may display the user's age (for example, 27), gender (for example, male), tag (for example, "winner in life"), wealth health value ranking information (for example, "Your wealth health value beats 80% of our bank's customers nationwide"), and smile value (for example, 88). In some examples, the user tag and the wealth health value ranking information are arranged along a first direction (for example, the vertical direction), the gender and age are arranged along the first direction, and the smile value lies between the tag and the gender in a second direction (for example, the horizontal direction), where the first direction is perpendicular to the second direction. However, this embodiment does not limit the arrangement of the information displayed in the second display region 102. In this example, displaying the wealth health value ranking information on the interactive interface can motivate users to share their interaction result, attracting more users to participate, and can stimulate users' interest in improving their ranking, thereby guiding them toward optimizing asset allocation. Displaying the smile value adds fun.
In some exemplary implementations, the real-time image in the first display region 101 is obtained in real time by the interactive terminal using the camera. The interactive terminal may update the smile value displayed in the second display region 102 according to the real-time image obtained in real time. By detecting and recognizing the user's expression in real time, the interactive terminal can make the smile value in the second display region 102 change dynamically, guiding the user to smile, improving the user's mood, and adding fun to the interaction. In some examples, to add further fun, a smile value leaderboard may be displayed in the third display region 103, showing the avatars and smile values of the top N users (for example, N = 3). However, this embodiment is not limited thereto.
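The disclosure only states that the displayed smile value follows the real-time image. One assumed way to keep the on-screen number from flickering frame to frame is exponential smoothing; both the function name and the smoothing factor below are illustrative choices, not part of the disclosure.

```python
def update_smile_value(displayed: float, measured: float, alpha: float = 0.3) -> float:
    """Blend the per-frame smile score (0-100) into the displayed value.

    `alpha` is an assumed smoothing factor; larger values track the
    camera's per-frame score more aggressively.
    """
    return round((1 - alpha) * displayed + alpha * measured, 1)
```

With `alpha = 0.3`, a displayed value of 50 and a measured frame score of 100 yield 65.0 on the next update, so the number climbs smoothly toward the target rather than snapping.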
In some exemplary implementations, in response to the user's operation on the photographing control 201 (for example, a single or double click), the interactive terminal may capture the user's real-time image and combine the real-time image with the interaction information to generate a commemorative photo. The interaction information may include the user's identity information and expression information. The interactive terminal combines the captured real-time image with the interaction information to generate the commemorative photo and transmits it to the management server, which processes it to obtain the target photo. The target photo may include the user's real-time image and the interaction information, where the interaction information may include the user's identity information, expression information, and business promotion information. FIG. 10 is a schematic diagram of a target photo according to at least one embodiment of the present disclosure. In some examples, as shown in FIG. 10, the target photo may include a picture display region 105 and an information display region 104. The picture display region 105 contains the user's real-time image, identity information, and expression information, and the information display region 104 contains the business promotion information. For example, the business promotion information may include the bank's promotional copy, promoted products, the wealth manager's QR code and name, the bank's intelligent-assistant avatar, and so on. However, this embodiment is not limited thereto.
In some exemplary implementations, after processing the target photo, the management server sends it to the mobile server. After downloading and saving the target photo, the mobile server generates download information and returns it to the management server, which sends it to the interactive terminal. The interactive terminal may display the download information on the interactive interface. FIG. 11 is a schematic diagram of the display of download information according to at least one embodiment of the present disclosure. In some examples, as shown in FIG. 11, the interactive interface includes a preview display region, which shows the commemorative photo, and a download-information display region, which shows the download information. The download information is presented as a QR code, with a text prompt nearby (for example, "Please use WeChat/mobile banking to scan the QR code to download and save the picture"). A print prompt is also displayed on the interactive interface (for example, "Please go to the service area and use the photo printer to print your photo"). In some examples, the user may scan the QR code displayed on the interactive interface with the bank client on a mobile terminal (for example, a mobile phone) to open a web link and obtain the target photo, then choose to download or print it. If the user chooses download, the target photo may be saved to the photo album of the user's mobile terminal. If the user chooses print, the mobile server may transmit the print instruction and photo information to the management server, which transmits them to the printing device; the printing device executes the print instruction to produce a paper copy of the target photo, which the user can collect from the printing device placed in the bank's service area.
In some exemplary implementations, the download information provided by the mobile server may be valid for a single use, to protect user privacy. For example, the QR code displayed on the interactive interface is valid for one scan only: after the user scans it once with a mobile terminal, the mobile server may invalidate the QR code displayed on the interactive terminal. In some examples, also to protect user privacy, after the user scans the QR code with a mobile terminal, the mobile server may send a verification code to the user's bound mobile phone number, and the user must enter the verification code on the mobile terminal before downloading the target photo. However, this embodiment is not limited thereto.
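The single-scan validity of the QR code can be modeled as a single-use token store on the mobile-server side. The class name, token format, and in-memory storage below are all assumptions; the disclosure only requires that a download link become invalid after its first redemption.

```python
import secrets

class DownloadTokens:
    """Single-use download links for target photos."""

    def __init__(self):
        self._photos = {}

    def issue(self, photo: bytes) -> str:
        token = secrets.token_urlsafe(16)   # unguessable link component
        self._photos[token] = photo
        return token

    def redeem(self, token: str):
        # pop() both returns the photo and invalidates the token,
        # so a second scan of the same QR code yields nothing.
        return self._photos.pop(token, None)
```

The QR code shown on the interactive interface would encode a URL containing such a token; the described SMS verification-code step would be an additional check layered on top of redemption.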
In some exemplary implementations, as shown in FIG. 9, the interaction flow may be exited in response to the user's operation on the exit control 202 (for example, a single or double click). Alternatively, as shown in FIG. 9, the interactive terminal may exit the interaction flow when no user operation is detected within a set period of time; the same applies in the state shown in FIG. 11. However, this embodiment is not limited thereto.
In this exemplary implementation, combining object recognition with an experiential game gives users a relaxed and entertaining experience, improving the user experience while boosting the service institution's publicity. For example, displaying the wealth health value ranking on the interactive interface can motivate users to share, attracting more participants, and can stimulate their interest in improving the ranking, guiding them toward optimizing asset allocation. Displaying the dynamically changing smile value can guide users to smile, improve their mood, and add fun.
In some exemplary implementations, the interaction system can interact with multiple people at the same time. In some examples, the interactive terminal displays the interactive interface when at least one target object is detected within the interaction range. In response to an operation by a target object on the interactive interface, at least two image capture frames may be displayed on the interactive interface, and images to be recognized of at least two target objects may be captured. For example, the interactive terminal may capture images and recognize the identities of two target objects in sequence: after displaying the image capture frame and capturing the first target object's image information, the interactive interface may display an add-recognition-object control and an interaction control; in response to an operation on the add-recognition-object control, the interface may display the image capture frame again and capture the second target object's image information; and in response to an operation on the interaction control, the interface may display the target objects' real-time images. In other examples, the interactive terminal may capture images and recognize the identities of two target objects simultaneously; for example, the interface may display two image capture frames at the same time and complete image capture and identity recognition for both target objects in parallel. However, this embodiment is not limited thereto.
In some exemplary implementations, after acquiring the identity information and expression information of at least two target objects, the interactive terminal may display their real-time images, identity information, and expression information on the interactive interface. FIG. 12 is a schematic diagram of a multi-person interaction interface according to at least one embodiment of the present disclosure. In some examples, as shown in FIG. 12, the interactive interface has a group-photo region 107, a comprehensive-information display region 106, and a smile progress bar 108. The group-photo region 107 displays the real-time images, identity information, and expression information of the two target objects, where each target object's identity information may include gender, age, and tag, and the expression information may include a smile value. The comprehensive-information display region 106 may include the number of activity participants (for example, 83,719), their gender ratio, their age distribution, and a smile value leaderboard showing the avatars and smile values of the top three users. The comprehensive-information display region 106 is located on one side (for example, the left side) of the group-photo region 107. The smile progress bar 108 may float over the group-photo region 107 on the side away from the comprehensive-information display region 106, and may indicate a single target object's smile value or the combined smile value of multiple target objects. However, this embodiment is not limited thereto.
In some exemplary implementations, when the smile value of a single target object indicated by the smile progress bar 108 reaches a threshold, an automatic commemorative photo may be triggered. For example, when the indicated smile value reaches a smile-photo threshold (for example, 88), the interactive interface of the interactive terminal may enter the photographing state shown in FIG. 13. As shown in FIG. 13, the interactive interface prompts that the photo is about to be taken and displays the group photo of the two target objects along with each target object's identity information (for example, gender, age, and tag) and expression information (for example, smile value). In some examples, during multi-person interaction, the service institution's cartoon mascot may be displayed on the interactive interface to add fun. However, this embodiment is not limited thereto. In other examples, the smile progress bar may indicate the combined smile value of multiple target objects and trigger the automatic commemorative photo when that combined value reaches a threshold.
In some exemplary implementations, as shown in FIG. 12 and FIG. 13, a target object's identity information and expression information may float near that target object's image to indicate which target object they belong to. For example, the target object's gender, age, and smile value may float above the target object's head, and the tag may float below it. However, this embodiment is not limited thereto.
In some exemplary implementations, the service institution's cartoon mascot may be displayed in FIG. 12 and FIG. 13 to add fun to the interaction. The mascot may have multiple poses, for example, a photographing pose, a reading pose, or a diving pose. This embodiment is not limited thereto.
In this exemplary implementation, combining the recognition of multiple objects with an experiential game can make human-computer interaction even more engaging.
In FIG. 9 to FIG. 13, to protect user privacy, the user images have been masked, with elliptical frames roughly indicating the positions of the users' heads. In FIG. 5 to FIG. 13, the service institution's cartoon mascot has been masked, with gray rectangular frames roughly indicating its position, and information such as QR codes has been masked with black rectangular frames with white dots.
In some exemplary implementations, during the interaction between the interactive terminal and the user, the management server may collect the user's experience behavior information from the interactive terminal (for example, the number of experiences and the experience duration) and perform statistical analysis on it to gauge the popularity of the interactive activity, providing a reference for decision-making by the service institution's managers. However, this embodiment is not limited thereto.
FIG. 14 is a schematic diagram of an interaction system according to at least one embodiment of the present disclosure. In some exemplary implementations, as shown in FIG. 14, the interaction system of this embodiment may include: an interactive terminal 41, a data server 42, and a mobile server 43. In this example, the interactive terminal 41 may communicate with the data server 42 and the mobile server 43 by wire or wirelessly. The interactive terminal 41 is configured to display an interactive interface when a target object is detected within the interaction range and, in response to an operation by the target object on the interactive interface, display an image capture frame on the interactive interface and capture the target object's image to be recognized. The interactive terminal 41 is further configured to process the captured image to obtain a target image and send the target image to the data server 42. The data server 42 is configured to recognize the target image to obtain a recognition result and return the result to the interactive terminal 41. The interactive terminal 41 is configured to display the target object's real-time image and identity information on the interactive interface when the recognition result includes identity information.
In some exemplary implementations, the interactive terminal 41 is further configured to combine the target object's real-time image with the interaction information to generate a target photo and send it to the mobile server 43. The mobile server 43 is configured to store the target photo and transmit its download information to the interactive terminal 41.
In this exemplary implementation, the interactive terminal 41 communicates directly with the data server 42 and the mobile server 43. In some examples, the interactive terminal 41 may include: a detection processor, a display, an input processor, a camera, and an information processor. The detection processor is configured to detect whether a target object is present within the interaction range. The display is configured to display the interactive interface when the detection processor detects a target object within the interaction range. The input processor is configured to receive the target object's operations on the interactive interface. The display is further configured to display the image capture frame on the interactive interface in response to the target object's operation on the interactive interface. The camera is configured to capture the target object's image to be recognized. The information processor is configured to process the captured image to obtain the target image, transmit the target image to the data server, and receive the recognition result returned by the data server. The information processor is further configured to acquire the expression information from the captured images.
For the remaining description of the interaction system of this embodiment, reference may be made to the related description of the foregoing embodiments, which is not repeated here.
At least one embodiment of the present disclosure further provides an electronic device, including a memory and a processor. The memory is adapted to store a computer program which, when executed by the processor, implements the steps of the interaction method described above.
FIG. 15 is a schematic diagram of an electronic device according to at least one embodiment of the present disclosure. In some exemplary implementations, as shown in FIG. 15, the electronic device of this embodiment includes a processor 501 and a memory 502, which may be connected through a bus. The memory 502 is adapted to store a computer program which, when executed by the processor 501, implements the steps of the interaction method provided by the foregoing embodiments.
In some exemplary implementations, the processor 501 may include a processing device such as an MCU or an FPGA. The memory 502 may store software programs and modules of application software, such as the program instructions or modules corresponding to the interaction method of this embodiment. The processor 501 performs a variety of functional applications and data processing, such as implementing the interaction method provided by this embodiment, by running the software programs and modules stored in the memory 502. The memory 502 may include high-speed random access memory and may further include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 502 may include memory located remotely relative to the processor 501, and such remote memory may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In addition, at least one embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing a computer program which, when executed, implements the steps of the interaction method described above.
Those of ordinary skill in the art will appreciate that all or some of the steps in the methods disclosed above, and the functional modules or units in the systems and devices, may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules or units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components cooperating. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The basic principles, main features, and advantages of the present disclosure have been shown and described above. The present disclosure is not limited by the above embodiments, which, together with the description, merely illustrate its principles. Without departing from the spirit and scope of the present disclosure, various changes and improvements may be made, all of which fall within the scope of the claimed disclosure.

Claims (19)

  1. An interaction method, comprising:
    displaying an interactive interface when a target object is detected within an interaction range;
    in response to an operation by the target object on the interactive interface, displaying an image capture frame on the interactive interface and capturing an image of the target object to be recognized;
    acquiring identity information of the target object based on the captured image to be recognized; and
    displaying a real-time image and the identity information of the target object on the interactive interface.
  2. The method according to claim 1, wherein the identity information comprises at least: information of the target object at a service institution.
  3. The method according to claim 2, wherein the service institution is a financial service institution, and the information of the target object at the service institution comprises at least one of: a wealth checkup result of the target object at the financial service institution, or a tag of the target object at the financial service institution.
  4. The method according to any one of claims 1 to 3, further comprising: acquiring expression information of the target object based on the captured image to be recognized, and displaying the expression information of the target object on the interactive interface; wherein the expression information comprises: an expression type or an expression scoring result.
  5. The method according to claim 4, further comprising: updating the expression information displayed on the interactive interface according to the real-time image of the target object.
  6. The method according to claim 4 or 5, further comprising: when the expression information of the target object satisfies a set condition, combining the real-time image of the target object with interaction information to generate a target photo suitable for the target object to save or print; the interaction information comprising at least: the identity information or the expression information of the target object.
  7. The method according to claim 4 or 5, further comprising:
    displaying a photographing control on the interactive interface; and
    in response to an operation by the target object on the photographing control, combining the real-time image of the target object with interaction information to generate a target photo suitable for the target object to save or print; the interaction information comprising at least: the identity information or the expression information of the target object.
  8. The method according to claim 6 or 7, wherein the interaction information further comprises: business promotion information of a service institution.
  9. The method according to any one of claims 6 to 8, further comprising: after generating the target photo, displaying download information of the target photo on the interactive interface.
  10. The method according to claim 1, further comprising: displaying activity promotion content when no target object is detected within the interaction range.
  11. The method according to any one of claims 1 to 10, wherein the image capture frame comprises a first image capture frame and a second image capture frame, and the second image capture frame is located within the first image capture frame.
  12. The method according to any one of claims 1 to 11, wherein acquiring the identity information of the target object based on the captured image to be recognized comprises:
    cropping and compressing the captured image to be recognized to obtain a target image; and
    recognizing the target image through a data server to acquire the identity information.
  13. An interaction system, comprising: an interactive terminal;
    wherein the interactive terminal is configured to: display an interactive interface when a target object is detected within an interaction range; in response to an operation by the target object on the interactive interface, display an image capture frame on the interactive interface and capture an image of the target object to be recognized; acquire identity information of the target object based on the captured image to be recognized; and display a real-time image and the identity information of the target object on the interactive interface.
  14. The interaction system according to claim 13, further comprising: a management server and a data server;
    wherein the management server is configured to process the captured image to be recognized to obtain a target image, and send the target image to the data server;
    the data server is configured to recognize the target image to obtain a recognition result, and return the recognition result to the management server; and
    the management server is further configured to return the identity information to the interactive terminal when the recognition result comprises identity information.
  15. The interaction system according to claim 14, further comprising: a mobile server;
    wherein the management server is further configured to combine the real-time image of the target object with interaction information to generate a target photo, and send the target photo to the mobile server;
    the mobile server is configured to store the target photo, and send download information of the target photo to the interactive terminal through the management server; and
    the interactive terminal is further configured to display the download information of the target photo on the interactive interface.
  16. The interaction system according to any one of claims 14 to 15, wherein the interactive terminal comprises: a detection processor, a display, an input processor, a camera, and an information processor;
    the detection processor is configured to detect whether a target object is present within the interaction range;
    the display is configured to display an interactive interface when the detection processor detects a target object within the interaction range;
    the input processor is configured to detect an operation by the target object on the interactive interface;
    the display is further configured to display an image capture frame on the interactive interface in response to an operation by the target object on the interactive interface;
    the camera is configured to capture an image of the target object to be recognized;
    the information processor is configured to transmit the captured image to be recognized to a management server, and receive the identity information returned by the management server; and
    the display is further configured to display the real-time image and the identity information of the target object on the interactive interface.
  17. The interaction system according to any one of claims 14 to 16, wherein the management server comprises: an image receiver, an image processor, an image sender, and a customer information processor;
    the image receiver is configured to receive the image to be recognized from the interactive terminal;
    the image processor is configured to process the image to be recognized to obtain a target image;
    the image sender is configured to send the target image to the data server; and
    the customer information processor is configured to receive the recognition result returned by the data server, obtain the identity information from the recognition result, and send the identity information to the interactive terminal.
  18. An electronic device, comprising: a memory and a processor, the memory being adapted to store a computer program which, when executed by the processor, implements the steps of the interaction method according to any one of claims 1 to 12.
  19. A non-transitory computer-readable storage medium storing a computer program which, when executed, implements the steps of the interaction method according to any one of claims 1 to 12.
PCT/CN2021/099176 2021-06-09 2021-06-09 Interaction method, interaction system and electronic device WO2022257044A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/099176 WO2022257044A1 (zh) 2021-06-09 2021-06-09 Interaction method, interaction system and electronic device
CN202180001499.6A CN115735190A (zh) 2021-06-09 2021-06-09 Interaction method, interaction system and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/099176 WO2022257044A1 (zh) 2021-06-09 2021-06-09 Interaction method, interaction system and electronic device

Publications (1)

Publication Number Publication Date
WO2022257044A1 true WO2022257044A1 (zh) 2022-12-15

Family

ID=84424704

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/099176 WO2022257044A1 (zh) 2021-06-09 2021-06-09 Interaction method, interaction system and electronic device

Country Status (2)

Country Link
CN (1) CN115735190A (zh)
WO (1) WO2022257044A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523668A (zh) * 2018-11-16 2019-03-26 深圳前海达闼云端智能科技有限公司 Method and apparatus for intelligent access control, and computing device
CN110442294A (zh) * 2019-07-10 2019-11-12 杭州鸿雁智能科技有限公司 Interface display method, apparatus, system and storage medium for an operation panel
CN111639534A (zh) * 2020-04-28 2020-09-08 深圳壹账通智能科技有限公司 Face-recognition-based information generation method, apparatus and computer device
CN111666780A (zh) * 2019-03-05 2020-09-15 北京入思技术有限公司 Intelligent door control security method based on emotion recognition technology
CN112101216A (zh) * 2020-09-15 2020-12-18 百度在线网络技术(北京)有限公司 Face recognition method, apparatus, device and storage medium
CN112562221A (zh) * 2020-12-02 2021-03-26 支付宝(杭州)信息技术有限公司 Terminal and method supporting face recognition
CN112669507A (zh) * 2019-12-17 2021-04-16 上海云思智慧信息技术有限公司 Method, system, medium and device for selecting a greeting mode

Also Published As

Publication number Publication date
CN115735190A (zh) 2023-03-03

Similar Documents

Publication Publication Date Title
US20200219295A1 Emoji manipulation using machine learning
US20190005359A1 Method and system for predicting personality traits, capabilities and suggested interactions from images of a person
CN108234591B Content data recommendation method, apparatus and storage medium based on an identity verification device
CN108399665A Face-recognition-based security monitoring method, apparatus and storage medium
US20170098122A1 Analysis of image content with associated manipulation of expression presentation
CN106940692A Interactive electronic form workflow assistant that guides interaction with electronic forms in a conversational manner
CN112036331B Training method, apparatus, device and storage medium for a liveness detection model
Krishna et al. Socially situated artificial intelligence enables learning from human interaction
CN106257396A Computer-implemented method, system and device for managing a collaborative environment
CN101681228A Biometric data acquisition system
WO2012172625A1 (ja) Beauty SNS system and program
US11430561B2 Remote computing analysis for cognitive state data metrics
CN109923851A Printing system, server, printing method and program
CN112150349A Image processing method and apparatus, computer device and storage medium
CN109670385A Method and apparatus for updating expressions in an application
CN108734003A Identity verification method, apparatus, device, storage medium and program
WO2022257044A1 (zh) Interaction method, interaction system and electronic device
CN114898395A Interaction method, apparatus, device, storage medium and program product
US20230394878A1 Program, information processing device, and method
US11659273B2 Information processing apparatus, information processing method, and non-transitory storage medium
JP2014191602A (ja) Display device, program and display system
CN113538703A Data display method and apparatus, computer device and storage medium
US20130257743A1 Insect Repelling Digital Photo Frame with Biometrics
CN210222833U Attendance machine for prompting tasks in a personnel management system
CN110597379A Elevator advertisement delivery system automatically matched to passengers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21944554

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE