CN111415185B - Service processing method, device, terminal and storage medium - Google Patents


Info

Publication number
CN111415185B
Authority
CN
China
Prior art keywords
image
information
terminal
user
target product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910016186.2A
Other languages
Chinese (zh)
Other versions
CN111415185A (en)
Inventor
李胤恺
耿志军
郭润增
黄家宇
刘文君
吕中方
周航
孔维伟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910016186.2A
Publication of CN111415185A
Application granted
Publication of CN111415185B
Legal status: Active

Links

Classifications

    • G Physics
    • G06 Computing; calculating or counting
    • G06Q Information and communication technology [ICT] specially adapted for administrative, commercial, financial, managerial or supervisory purposes
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; price estimation or determination; fundraising
    • G06Q30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G06Q30/0282 Rating or review of business operators or products
    • G06V Image or video recognition or understanding
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification


Abstract

The invention discloses a service processing method, a device, a terminal, and a storage medium, belonging to the technical field of computers. The method comprises the following steps: when a person is detected within a target range, recognizing a captured face image and obtaining the user information corresponding to the face image; acquiring a first image; obtaining target product information from a plurality of candidate product information corresponding to the user information; displaying, according to the first image and the target product information, a second image showing the effect of the person in the first image applying the target product; and, when a value transfer instruction is obtained, sending a value transfer request. The invention determines user information by face recognition and recommends products on that basis, provides a virtual product trial function so that the user need not apply or wear a physical product, and integrates the selection, trial, and value transfer links without manual assistance, thereby reducing labor cost and effectively improving the efficiency of the service processing method.

Description

Service processing method, device, terminal and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a service processing method, a device, a terminal, and a storage medium.
Background
With the development of computer technology, many services can be handled by computer in order to improve efficiency. Assisting a user in selecting, trying, and purchasing products is one such service.
At present, the typical service process is that the user tries the product on his or her own body, a salesperson assists in the selection, and, when purchasing, the user must carry the product to a counter to complete the value transfer.
In this process the user must personally apply or wear the physical product, which is unhygienic and time-consuming; if the user wants to compare several products, each must be applied or worn in turn, making selection inconvenient. Moreover, selection, trial, and value transfer are disconnected links, each of which takes time and requires additional labor, so the overall efficiency of the service processing method is low.
Disclosure of Invention
The embodiments of the present invention provide a service processing method, a device, a terminal, and a storage medium, which can solve the problem of low efficiency in the related art. The technical solution is as follows:
In one aspect, a service processing method is provided, the method comprising:
when a person is detected within a target range, recognizing a captured face image and obtaining user information corresponding to the face image;
acquiring a first image;
obtaining target product information from a plurality of candidate product information corresponding to the user information;
displaying, according to the first image and the target product information, a second image corresponding to the first image, the second image reflecting the effect of the person in the first image applying the target product corresponding to the target product information; and
when a value transfer instruction corresponding to the target product information is obtained, sending a value transfer request to a server, the value transfer request instructing the server to perform the service processing corresponding to the value transfer request.
In one aspect, a service processing apparatus is provided, the apparatus comprising:
an acquisition module, configured to recognize a captured face image when a person is detected within a target range, and obtain user information corresponding to the face image;
the acquisition module being further configured to acquire a first image;
the acquisition module being further configured to obtain target product information from a plurality of candidate product information corresponding to the user information;
a display module, configured to display, according to the first image and the target product information, a second image corresponding to the first image, the second image reflecting the effect of the person in the first image applying the target product corresponding to the target product information; and
a sending module, configured to send a value transfer request to a server when a value transfer instruction corresponding to the target product information is obtained, the value transfer request instructing the server to perform the service processing corresponding to the value transfer request.
In one aspect, a terminal is provided, comprising a processor and a memory in which at least one instruction is stored, the instruction being loaded and executed by the processor to implement the operations performed by the service processing method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, the instruction being loaded and executed by a processor to implement the operations performed by the service processing method.
According to the embodiments of the present invention, when a person is detected, the person is recognized and the corresponding user information is determined, so that candidate product information can be recommended on the basis of that user information. An image is then captured and, combined with the target product information, an image showing the person after applying the target product can be displayed, thereby realizing a virtual trial. When the user wants to purchase the target product, a value transfer request can be sent to the server based on the obtained value transfer instruction. In this process, user information is determined by face recognition for product recommendation, so the user does not need to log in manually; a virtual trial function is provided, so no physical product needs to be applied or worn; and the selection, trial, and value transfer links are realized as an integrated whole without manual assistance, which reduces labor cost and effectively improves the efficiency of the service processing method.
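As a hedged illustration, the summarized flow can be wired together end to end in a short Python sketch. Every function below is a hypothetical stand-in: the patent does not prescribe concrete algorithms for detection, recognition, recommendation, or rendering, and the dict-based `frame`, `store`, and `catalog` structures are assumptions made only for this example.

```python
# Minimal, self-contained sketch of the claimed flow. All components are
# illustrative stand-ins, not the patent's actual implementation.

def detect_person(frame):
    # Stand-in detector: a frame tagged as containing a person is a hit.
    return frame.get("has_person", False)

def recognize_face(frame, store):
    # Stand-in for face recognition: look the face ID up in stored info.
    return store.get(frame.get("face_id"))

def recommend(user_info, catalog):
    # Stand-in recommender: filter the catalog by recorded preferences.
    return [p for p in catalog if p["category"] in user_info["preferences"]]

def render_trial(first_image, product):
    # Stand-in for the second image: annotate the frame with the product.
    return {**first_image, "applied_product": product["name"]}

def process_service(frame, store, catalog):
    """End to end: detect -> recognize -> recommend -> virtual trial."""
    if not detect_person(frame):          # person within target range?
        return None
    user_info = recognize_face(frame, store)   # step 201
    if user_info is None:
        return None  # would fall through to the registration interface
    candidates = recommend(user_info, catalog)  # step 203
    if not candidates:
        return None
    target = candidates[0]  # simplest rule: first candidate (step 204)
    second_image = render_trial(frame, target)  # virtual trial display
    return {"user": user_info["name"], "target": target["name"],
            "preview": second_image}
```

The value transfer step is omitted here, since it only amounts to sending the server a request referencing the chosen target product.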
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. The drawings in the following description are evidently only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a diagram of an implementation environment of a service processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a service processing method according to an embodiment of the present invention;
Fig. 3 is a flowchart of a service processing method according to an embodiment of the present invention;
Fig. 4 is a flowchart of a service processing method according to an embodiment of the present invention;
Fig. 5 is a flowchart of a service processing method according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that the information (including but not limited to face images, user information, first images, product information, product images, and value transfer accounts), data (including but not limited to data used for analysis, stored data, and displayed data), and signals (including but not limited to signals transmitted between user terminals and other devices) involved in the present application are all fully authorized by the user or the relevant parties, and the collection, use, and processing of the related data must comply with the applicable laws, regulations, and standards of the relevant countries and regions.
Fig. 1 shows an implementation environment of a service processing method according to an embodiment of the present invention. Referring to fig. 1, the implementation environment may include a terminal 101 and a server 102. The terminal 101 and the server 102 may be connected through a network for data exchange; by sending a network request to the server 102, the terminal 101 can have the server 102 perform the corresponding steps based on that request and thereby provide the corresponding service.
In the embodiment of the present invention, the terminal 101 may provide a face recognition function, a virtual trial function of a product, and a value transfer function, and when a value transfer operation is detected, the terminal 101 may send a value transfer request of the product to the server 102, and the server 102 performs corresponding business processing based on the value transfer request, thereby providing a value transfer service for the terminal 101.
It should be noted that, the terminal 101 may access the server 102 through a client installed on the terminal 101, or may access the server 102 through a web portal, which is not limited in the embodiment of the present invention.
Fig. 2 is a flowchart of a service processing method according to an embodiment of the present invention, where the service processing method is applied to a terminal, and the terminal may be the terminal 101 in the implementation environment shown in fig. 1. Referring to fig. 2, the service processing method may include the steps of:
201. When a person is detected within the target range, the terminal recognizes the captured face image and obtains the user information corresponding to the face image.
In the embodiment of the present invention, the terminal may have an image capture function and a face recognition function. The terminal captures an image, analyzes it, and determines whether a person is present within a target range of the image; when a person is present, the corresponding functions are activated. The target range may be the whole image or a partial region of it. When the terminal determines that the target range contains a person, this indicates that the user may wish to use the service processing service provided by the terminal. The terminal may then capture a face image and recognize it, or perform face recognition on the previously captured image, and determine the identity of the user corresponding to the face, so as to provide the corresponding service processing service for that user.
Specifically, the terminal may be equipped with an image capture component through which images are captured. For example, the terminal may be a makeup mirror or a fitting mirror provided with a camera, so that the mirror implements the image capture and face recognition functions through the camera; of course, the makeup mirror or fitting mirror may also provide the virtual trial and value transfer functions described below. The products offered by the terminal may include cosmetic products or clothing, which is not limited in the embodiment of the present invention.
In one possible implementation, the target range is the terminal's image capture range; that is, the terminal captures images of the target range. When the user wants to use the service provided by the terminal, the user moves so that his or her body, or a target part of the body such as the face, is within the terminal's image capture range. The terminal captures images of the target range, and when it determines that a captured image includes a person, that is, when it detects that the target range contains a person, it performs the user identification step.
In another possible implementation, the terminal's image capture range contains the target range; that is, the target range is a partial region of the capture range. In this case, when the user wants to use the service processing service provided by the terminal, the user must hold a posture that keeps the body, or the target part of the body, within a certain region of the image captured by the terminal. In step 201, the terminal captures images of its capture range and detects whether the target range within a captured image contains a person; if so, that is, if it detects that the target range contains a person, it performs the user identification step.
The above are two implementations of detecting that the target range contains a person; the embodiment of the present invention does not limit the specific definition of the target range. In one possible implementation, the terminal is configured with a detection period and periodically captures images and checks whether a captured image contains a person. Of course, the terminal may instead capture images in real time; the embodiment of the present invention does not limit this choice, and if a detection period is used, its value is likewise not limited. When determining whether a captured image contains a person, the terminal may use any object detection algorithm, which is not limited in the embodiment of the present invention.
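The periodic-detection variant described above can be sketched as follows. The detector `is_person_in_target_range` is a hypothetical stand-in (the patent leaves the detection algorithm open), and representing frames as dicts of tagged person centers is an assumption made for the example.

```python
# Sketch: sample frames on a fixed detection period and run the (stand-in)
# person detector only on sampled frames. The target range is modeled as a
# rectangle (x0, y0, x1, y1), i.e. a partial region of the capture range.

def is_person_in_target_range(frame, target_range):
    # Stand-in detector: check whether any tagged person center falls
    # inside the target-range rectangle.
    x0, y0, x1, y1 = target_range
    return any(x0 <= px <= x1 and y0 <= py <= y1
               for px, py in frame.get("person_centers", []))

def first_detection(frames, target_range, period=3):
    """Return the index of the first sampled frame containing a person."""
    for i, frame in enumerate(frames):
        if i % period:          # skip frames between detection ticks
            continue
        if is_person_in_target_range(frame, target_range):
            return i
    return None  # no person detected in any sampled frame
```

Setting `period=1` recovers the real-time variant in which every captured frame is checked.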
When the terminal determines that the target range contains a person, it can further determine the person's identity, which may be done by face recognition. The terminal may capture a face image and recognize it, or it may run face recognition on the image captured earlier when it detected that the target range contained a person: if that image contains a face, the user's identity can be recognized from it. Of course, if the earlier image contains no face or recognition fails, the terminal then captures a new face image and recognizes that.
Specifically, when performing face recognition, the terminal compares the captured face image with the face images in the stored information and takes different steps depending on the recognition result. The recognition result falls into two cases, and the steps the terminal performs in each case are as follows:
In the first case, the captured face image matches a face image in the stored information; that is, the recognition result shows that the stored information contains user information corresponding to the face image. The terminal then obtains that user information from the stored information.
User information may be stored on the terminal, and by recognizing the face image the terminal determines the identity of the user in it and retrieves the stored user information. For example, the user may previously have registered an account on the terminal, or on an application installed on the terminal, and the account's user information may have been stored at registration. In other words, the user is a member, and in step 201 the terminal identifies the user and determines whether he or she is a member. For example, a membership system may be installed on the terminal, and when the terminal determines that the user is a member, it obtains the corresponding member information (user information) from the membership system.
Of course, the stored information may instead reside on other terminals or on a server, in which case the terminal accesses those terminals or the server to obtain the user information corresponding to the face image. The embodiment of the present invention does not limit which implementation is used.
In the second case, the captured face image matches no face image in the stored information; that is, the recognition result shows that the stored information contains no user information corresponding to the face image. The terminal then obtains the user information corresponding to the face image through a user information settings interface.
The user may not have registered an account on the terminal or on a related application installed on it, so the stored information holds no user information corresponding to the face image. The terminal may then display a user information settings interface containing a number of input fields or options that the user fills in or selects to set the user information. For example, if the user is not a member, the terminal displays a registration interface, and the user enters the information needed to register an account, from which the terminal obtains the user information corresponding to the face image. Of course, the terminal may then store the face image together with the user information, or send them to the other terminal or server that stores user information; for example, in a membership-system implementation, the terminal updates the membership system with the newly registered member's information.
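The two recognition outcomes above reduce to a lookup-or-register pattern, sketched below. Abstracting face matching into an exact key lookup is a deliberate simplification (real matching compares feature embeddings), and `register_fn`, which stands in for the user filling out the settings interface, is a hypothetical name.

```python
# Sketch of the two cases: a match returns stored user information
# (case 1); a miss triggers registration and updates the store (case 2).

def get_or_register_user(face_key, stored_info, register_fn):
    """Return (user_info, newly_registered) for a recognized face."""
    # Case 1: stored info already holds this face -> return it directly.
    if face_key in stored_info:
        return stored_info[face_key], False
    # Case 2: no match -> show the settings interface (register_fn stands
    # in for the user's input), then store the new user information.
    user_info = register_fn()
    stored_info[face_key] = user_info
    return user_info, True
```

In the membership-system variant, the final store update would instead push the new member's information to that system.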
Through this face recognition process, the terminal determines the user's identity simply by capturing a face image, with no need for the user to enter an account and password manually or to verify identity through another terminal. This effectively reduces user operations and their complexity and improves the efficiency of obtaining user information.
In one possible implementation, after obtaining the user information corresponding to the face image, the terminal may also display the user information in the current interface. The user information may include the user's identification information, such as the user's name, and may of course include other information, such as the user's avatar, which is not limited in the embodiment of the present invention.
202. The terminal acquires a first image.
In the embodiment of the present invention, the terminal may also provide a virtual trial function. The terminal acquires a first image containing the person who is to take the virtual trial and simulates in that image the effect of the person after applying the product, so that the user can see the trial effect without applying or wearing a physical product. In one possible implementation, the terminal captures its image capture range through the image capture component to obtain the first image, and may also display the first image after acquiring it.
Specifically, the terminal may acquire the first image in real time, at intervals of a capture period, or after step 201, and carry out the subsequent virtual trial process based on it.
For example, the terminal may be a product trial terminal, such as a makeup mirror with a virtual trial function. It may equally provide virtual trials of other products: it may, for instance, be a fitting mirror, or offer trials of accessories and the like. The product may be any product that is normally selected and tried before purchase, such as cosmetics or clothing. The user walks up to the product trial terminal; when the terminal detects a person within its field of view, or within a certain region of its field of view, it activates the face recognition function, identifies the user, and acquires and displays the first image in real time.
It should be noted that, for steps 201 and 202, the terminal may execute step 201 first and then step 202, that is, provide the virtual trial function after determining the user's identity; it may execute both steps simultaneously upon detecting that the target range contains a person; or it may execute step 202 upon that detection and execute step 201 while step 202 is in progress. The embodiment of the present invention does not limit the execution order of steps 201 and 202.
203. The terminal obtains, based on the user information, a plurality of candidate product information corresponding to the user information.
After obtaining the user information, the terminal can recommend suitable products to the user according to it. The terminal may store a large set of candidate product information, from which it selects a plurality of candidate product information items for the current user according to the user information.
Specifically, the terminal may obtain the plurality of candidate product information corresponding to the user information according to at least one of: the historical purchase information corresponding to the user information, the historical browsing information corresponding to the user information, and the historical purchase and browsing information corresponding to other user information whose similarity to the user information is greater than a threshold.
In one possible implementation, the terminal stores historical purchase information and/or historical browsing information for a plurality of users. When it needs to recommend products to a given user, it can do so based on that user's own history, and it can also recommend based on the history of other users similar to that user. For example, if the user has bought a lipstick of a certain brand or color number, the terminal may take the various lipsticks of that brand or color number as candidate product information, and may of course add other candidates for the user to choose from.
The threshold may be preset by relevant personnel, and its specific value is not limited in the embodiment of the present invention.
The above are only some implementations of obtaining the plurality of candidate product information corresponding to the user information; the terminal may use others. For example, different user information may correspond to different product groups: if the user information indicates that the user is male, at least one product group applicable to men may be defined, and the terminal obtains the plurality of candidate product information from that group or groups.
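The history-plus-similar-users rule above can be sketched as follows. The patent only requires "similarity greater than a threshold"; the Jaccard similarity over profile tags, the threshold value, and the field names (`tags`, `history`) are all illustrative assumptions.

```python
# Sketch: combine the user's own history with the history of users whose
# profile similarity exceeds a threshold, preserving first-seen order.

def similarity(info_a, info_b):
    # Jaccard similarity over profile tags: an illustrative choice only.
    tags_a, tags_b = set(info_a["tags"]), set(info_b["tags"])
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def candidate_products(user, all_users, threshold=0.5):
    """Union of the user's history with similar users' histories."""
    candidates = list(dict.fromkeys(user["history"]))  # dedupe, keep order
    for other in all_users:
        if other is user:
            continue
        if similarity(user, other) > threshold:
            for product in other["history"]:
                if product not in candidates:
                    candidates.append(product)
    return candidates
```

A deployment could equally use browsing history or predefined product groups as the source of candidates, as the surrounding paragraphs note.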
In one possible implementation, when the terminal obtains the plurality of candidate product information, it may also display them in the current interface; further, it may display them in a target area of that interface, for example its lower area.
In a specific possible embodiment, the plurality of candidate product information may be displayed in groups. The grouping may be determined by the type of the corresponding products, by their brand, or by some field within the product information itself. Grouped by type, the products may include lipstick, eye shadow, blush, concealer, eyebrow pencil, hair clips, earrings, coats, trousers, skirts, shoes, hats, and so on; these are only examples, and the embodiment of the present invention is not limited to them. The terminal may treat the product information of each such type as one group. Likewise, the terminal may group the product information by brand, or group lipsticks by color number and eyebrow pencils by color.
In one possible implementation, the terminal may display a product menu in the target area, showing the identifier of at least one product group and at least one candidate product information item in each group.
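The grouped product menu can be sketched as a simple bucketing step. The field names `type` and `name` are assumptions; grouping by brand or color number would only change the key used.

```python
# Sketch: bucket candidate product info by product type, so the menu can
# show each group's identifier alongside its members.

def build_product_menu(candidates):
    """Group candidate product info by type, preserving insertion order."""
    menu = {}
    for product in candidates:
        menu.setdefault(product["type"], []).append(product["name"])
    return menu
```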
204. The terminal acquires target product information from the plurality of candidate product information.
After obtaining the plurality of candidate product information, the terminal selects target product information from them for the virtual trial. The selection may be carried out by the terminal according to a selection rule, or be based on a choice made by the user.
Specifically, the terminal may obtain the target product information in any of the following ways:
In the first way, the terminal takes the first candidate product information, in the order in which the candidates are arranged, as the target product information.
In this way, after obtaining the plurality of candidate product information, the terminal uses the first-ranked candidate as the target product information and performs the following steps to realize the virtual trial of the corresponding target product. No user selection is needed, which improves the efficiency of the virtual trial and saves the user's time.
And secondly, the terminal acquires candidate product information corresponding to the product selection instruction as the target product information according to the product selection instruction.
In the second manner, after obtaining the plurality of candidate product information, the terminal can display them. The user can select the candidate product information to be tried from the displayed information and perform a product selection operation on the terminal. When the terminal obtains the product selection instruction triggered by this operation, it can take the candidate product information selected by the user as the target product information and perform the following steps to realize the virtual trial.
In the third manner, the terminal randomly acquires one piece of candidate product information from the plurality of candidate product information as the target product information.
In the third mode, after the terminal acquires the plurality of candidate product information, the terminal may randomly select one candidate product information for virtual trial. The random acquisition process may be implemented by any random algorithm, which is not limited in the embodiment of the present invention.
It should be noted that the above are only three manners of obtaining the target product information; the terminal may also obtain the target product information from the plurality of candidate product information in other manners. For example, based on the matching degree between each piece of candidate product information and the user information, the candidate product information with the highest matching degree may be taken as the target product information. The embodiment of the present invention does not limit the manner of obtaining the target product information.
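The three acquisition manners can be sketched in one small dispatcher. This is a minimal illustration under assumed inputs (a plain list of candidates and, for the second manner, an index taken from the product selection instruction); it is not a prescribed interface of the embodiment.

```python
# Hypothetical sketch of the three target-product acquisition manners.
import random

def pick_target(candidates, mode="first", selection_index=None, rng=None):
    """Select target product information per the manners described above."""
    if mode == "first":        # manner 1: first in the ordered list
        return candidates[0]
    if mode == "selected":     # manner 2: index from a user selection instruction
        return candidates[selection_index]
    if mode == "random":       # manner 3: any random algorithm may be used
        return (rng or random).choice(candidates)
    raise ValueError(f"unknown mode: {mode}")

candidates = ["lipstick", "blush", "eyebrow pencil"]
assert pick_target(candidates) == "lipstick"
```

The matching-degree variant mentioned above would be a fourth branch that calls `max(candidates, key=...)` with a scoring function over the user information.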
Steps 203 and 204 constitute the process of obtaining target product information from a plurality of candidate product information corresponding to the user information: the terminal may recommend a plurality of candidate product information based on the user information and then obtain the target product information from among them, so as to perform the following steps for virtual trial. In one possible implementation, the terminal may display a plurality of preset candidate product information without acquiring the user information, and when executing step 202, select one piece of candidate product information for virtual trial in any of the above manners of obtaining target product information.
It should be noted that step 202 may be performed concurrently with steps 203 and 204, before them, or after them; the embodiment of the present invention does not limit the execution timing of step 202.
205. And the terminal displays a second image corresponding to the first image according to the first image and the target product information.
When the terminal acquires the target product information and also acquires the first image, the terminal can process the first image based on the target product information so as to display the second image. The second image is used for reflecting the effect that the person in the first image applies the target product corresponding to the target product information.
Specifically, different products may be applied at different positions on a person; for example, lipstick is applied to the lips, blush to the cheeks, a coat to the upper body, shoes to the feet, and so on. The terminal may therefore first acquire the position corresponding to the product and then process the corresponding position of the first image. The process of displaying the second image in step 205 may be implemented in a plurality of manners; three are described below, and the terminal may display the second image in any of them, or may use other manners, which are not limited by the embodiment of the present invention.
In the first manner, according to the first image and the target product information, the terminal determines position information of a product image corresponding to the target product information, where the product image is an image of the product as applied to the person in the first image; the terminal displays the first image and displays the product image corresponding to the target product information at the position indicated by the position information in the first image.
When performing the virtual trial, the terminal can display the first image and the product image corresponding to the target product information in combination, achieving the effect of the corresponding product being applied on the human body. Different target product information may correspond to different product images, and the terminal can acquire the product image corresponding to the target product information. For example, a red-bean-colored lipstick may correspond to pixel points in an image whose pixel values differ from one another by no more than a threshold.
In one possible implementation, the target product information may also correspond to human body part information, which indicates the part of the person where the target product is applied. The terminal may obtain the human body part information corresponding to the target product information and determine, based on this information and the first image, the position information of the product image in the first image. For example, the position information corresponding to lipstick is the area where the lips are located in the first image, and the position information corresponding to blush is the area where the cheeks are located in the first image.
Further, the shape and size of each person's body parts may differ, so when determining the position information of the product image, the terminal needs to consider the shape and size of the corresponding part of the person in the first image, and can take the information on the position of that part in the first image as the position information of the product image. Specifically, the terminal may detect the human body part in the first image and determine the coordinates of the part corresponding to the product image. Of course, the position information may also be represented in other ways, for example as a relative position with respect to a certain point in the first image, which is not limited in the embodiment of the present invention.
Having acquired the first image and the product image and determined the position of the product image in the first image, the terminal can display the first image and, based on the acquired position information, display the product image at the corresponding position in the first image.
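The first manner can be sketched as copying product-image pixels into the first image at the detected position. The nested-list "images", the hard-coded patch, and the coordinates are illustrative assumptions standing in for real landmark-based placement.

```python
# Hypothetical sketch of manner one: overlay the product image at the
# position indicated by the position information. Images are simplified
# to nested lists of pixel values.
def overlay_product(base, product, top, left):
    """Return a copy of `base` with `product` pasted at (top, left)."""
    out = [row[:] for row in base]          # keep the first image intact
    for i, row in enumerate(product):
        for j, px in enumerate(row):
            out[top + i][left + j] = px
    return out

first_image = [[0] * 4 for _ in range(4)]   # assumed 4x4 "first image"
lip_patch = [[9, 9], [9, 9]]                # assumed 2x2 "product image"
displayed = overlay_product(first_image, lip_patch, 1, 1)
```

A real implementation would clip the patch to the detected body-part region rather than use fixed coordinates.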
In the second manner, the terminal generates a second image corresponding to the first image according to the first image and the product image corresponding to the target product information, where the product image refers to an image of the product as applied to the person in the first image; the terminal then displays the second image.
After acquiring the first image and the target product information, the terminal can acquire the product image corresponding to the target product information and then generate a second image based on the first image and the product image; the second image is the image obtained after the product image is applied to the first image. Specifically, the terminal may also acquire the position information of the product image and determine its position in the first image, so that image processing can be performed according to this position information when generating the second image. Of course, in another possible implementation, the terminal may determine the position information of both the first image and the product image in the second image, and draw the second image based on that position information. The embodiment of the invention is not limited to a specific implementation.
In one possible implementation manner, the first image may be a Three-dimensional (3D) image, and when the terminal generates the second image, the terminal may perform interpolation processing on the first image based on the product image to obtain the second image.
In the third manner, the terminal processes the first image according to pixel information corresponding to the target product information to obtain a second image corresponding to the first image, where the pixel information represents the pixel values of the pixel points corresponding to the product and the distribution information of those pixel points; the terminal then displays the second image.
In the third mode, different product information may correspond to different pixel information, and after the terminal obtains the target product information, the terminal may obtain the pixel information corresponding to the target product information, so that the pixel information is used as a data basis for processing the first image, and the pixel value of the pixel point in the first image is processed to obtain the second image.
Specifically, the pixel information may include a pixel value and distribution information of a pixel point, and the terminal may determine, based on the first image, location information corresponding to the target product information, so that the pixel point at a location indicated by the location information in the first image may be processed based on the pixel information, to obtain the second image.
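The third manner can be sketched as rewriting the pixel points at the positions indicated by the position information. The tuple pixel values and the hard-coded "lip region" coordinates are illustrative assumptions.

```python
# Hypothetical sketch of manner three: process the first image's pixels
# at the positions indicated by the position information.
def apply_pixel_info(image, positions, product_pixel_value):
    """Return a second image whose pixels at `positions` take the
    product's pixel value; the first image is left unchanged."""
    second = [row[:] for row in image]
    for r, c in positions:
        second[r][c] = product_pixel_value
    return second

first_image = [[(200, 200, 200)] * 3 for _ in range(3)]  # assumed 3x3 RGB image
lip_region = [(1, 0), (1, 1), (1, 2)]                    # assumed lip positions
second_image = apply_pixel_info(first_image, lip_region, (200, 30, 60))
```

In practice the distribution information would describe a full region (e.g. a mask over the lips) rather than a short coordinate list.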
It should be noted that the process of displaying the second image has been described above in only three manners; it may also be implemented in other manners, which are not limited by the embodiment of the present invention. The terminal either combines the image including the person with the image of the product as applied on the person, or processes the acquired image based on information related to the product, thereby reflecting the effect of the person in the image applying the target product corresponding to the target product information, that is, achieving the virtual trial effect. This process can be realized using augmented reality (AR) technology: the makeup product is virtually painted on the face in the captured image of the user, or the clothes are virtually worn on the person in the image, so that the user can learn the actual application effect of the makeup product or clothes directly from the second image displayed by the terminal, without painting or wearing the physical product. This saves the time of painting or wearing and effectively improves trial efficiency.
In one possible implementation, the screen of the terminal may be a mirror screen, so that the image displayed by the terminal has higher color fidelity, higher brightness, and a better display effect.
In a specific possible embodiment, the terminal may further provide a trial adjustment function, by which the user can perform an adjustment operation to adjust the degree to which the product appears applied on the person. For example, a lipstick applied heavily and a lipstick applied lightly may look different: the color after application, the degree of gradient, and so on may differ. When the terminal acquires the adjustment instruction triggered by the adjustment operation, it can execute the corresponding steps to adjust the display effect at the corresponding position of the product in the second image.
Specifically, the terminal may obtain the transparency corresponding to the target product information according to the adjustment instruction. Accordingly, in the above three modes, the transparency may also be referred to in the process of displaying the product image corresponding to the target product information by the terminal or in the process of displaying the second image by the terminal.
For the first manner, the step of displaying the product image at the position indicated by the position information in the first image may be: the terminal displays the product image corresponding to the target product information at that position according to the transparency. For the second manner, the step of displaying the second image may be: the terminal displays the first image in the second image, and displays the product image in the second image according to the transparency. For the third manner, the step of displaying the second image may be: the terminal displays the pixel points corresponding to the first image in the second image, and displays the pixel points corresponding to the target product information according to the transparency.
In this way, by changing the transparency of the product image or of the pixel points corresponding to the target product information, the degree to which the product has been applied can be seen in the second image displayed by the terminal, so the display effect of the second image is better. By providing the trial adjustment function, the user can quickly and conveniently learn the various effects of the product when applied, and on that basis further decide whether to purchase the product.
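Displaying a product pixel "according to the transparency" can be sketched as standard alpha blending. The RGB values below are assumptions for illustration; the convention chosen here (transparency 1.0 = product layer fully transparent, i.e. light application) is one possible reading, not a definition from the embodiment.

```python
# Hypothetical sketch: blend a product pixel over a base pixel
# according to a transparency value in [0, 1].
def blend_pixel(base_px, product_px, transparency):
    """Alpha-blend: transparency 1.0 -> base only (light application),
    transparency 0.0 -> product only (heavy application)."""
    alpha = 1.0 - transparency
    return tuple(round(alpha * p + (1 - alpha) * b)
                 for b, p in zip(base_px, product_px))

lips = (180, 120, 120)       # assumed original lip color (RGB)
lipstick = (200, 30, 60)     # assumed product color
light = blend_pixel(lips, lipstick, 0.7)   # lightly applied
heavy = blend_pixel(lips, lipstick, 0.1)   # heavily applied
```

Adjusting the slider in the trial adjustment function would simply re-run this blend with a new transparency value.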
206. And when a numerical value transfer instruction corresponding to the target product information is acquired, the terminal acquires the face image.
In addition to the above image acquisition, face recognition, and virtual trial functions, the terminal can provide a service related to value transfer. When the terminal displays the second image or displays the plurality of candidate product information and the user wants to purchase the product currently being tried or the product corresponding to one of the candidate product information, the user can perform a value transfer operation on the terminal. When the terminal acquires the value transfer instruction triggered by this operation, it can perform the corresponding confirmation steps and trigger a value transfer request to be sent to the server, requesting the corresponding business processing.
In one possible implementation, when the terminal displays the second image or displays a plurality of candidate product information, a value transfer button may also be provided, and the user may perform a touch operation on the value transfer button, so that the terminal may obtain a value transfer instruction, and perform step 206. In yet another possible implementation, the terminal may further provide an add button in the interface, where the add button is used to take the target product information as the product information to be subjected to the numerical transfer.
For example, when the user selects a lipstick, the terminal may display the second image in the interface, display the effect after the user smears the lipstick, and display the information of the lipstick and an immediate purchase button in the target area in the interface, where the immediate purchase button is a numerical value transfer button, and the user may perform touch operation on the immediate purchase button, and the terminal may obtain a numerical value transfer instruction to perform the following numerical value transfer steps. In still another possible implementation manner, when the terminal displays the second image, a shopping cart adding button may be displayed in the interface, the user may also perform a touch operation on the shopping cart adding button, and the terminal may use the information of the lipstick as one item of product information to be subjected to numerical transfer based on the touch operation.
Specifically, the terminal can support face payment (Face Pay). When a value transfer instruction is acquired, the terminal can collect a face image and recognize it, determining whether the current face image matches the face image corresponding to the value transfer account. If they match, it can be confirmed that the current user has the authority to transfer value using the value transfer account; if not, the current user cannot use the value transfer account to transfer value. With this face recognition payment mode, the user does not need to input a payment password or use another terminal to pay, which can effectively improve business processing efficiency.
The value transfer account may be a value transfer account associated with the user information acquired by the terminal, for example, the user may bind the value transfer account when registering the user information, and in step 206, the terminal may acquire the value transfer account associated with the user information, so as to perform identity verification on the acquired face image based on the value transfer account. In another possible implementation manner, the value transfer account may also be determined according to an account setting instruction, a user may input or select a value transfer account from a plurality of value transfer accounts associated with the user information on the terminal, and the terminal may perform identity verification on the collected face image based on the value transfer account.
In one possible implementation, in the above steps, the terminal's acquisition of the user information, provision of the virtual trial function, and identity verification during value transfer can all be implemented by face recognition. The user therefore does not need to perform excessive manual operations on the terminal for it to provide these multiple functions, which effectively reduces user operations, lowers their complexity, and improves the efficiency of the overall business processing flow.
207. The terminal compares the face image with the face image corresponding to the numerical value transfer account associated with the user information, and when the face image and the face image corresponding to the numerical value transfer account meet the target condition, the terminal executes step 208; when the face image and the face image corresponding to the numerical value transfer account do not meet the target condition, the terminal executes step 209.
The terminal may obtain a face image corresponding to the value transfer account, where the face image corresponding to the value transfer account is used to perform identity verification on the collected face image, and the terminal may execute the step 207 to compare the face image collected in the step 206 with the face image corresponding to the value transfer account.
If the comparison result indicates that the face image corresponding to the value transfer account and the collected face image are of the same person, it is verified that the current user can use the value transfer account to transfer value, the identity verification succeeds, and the terminal can execute step 208 below. If the comparison result indicates that they are of different persons, the current user does not have the value transfer authority for the value transfer account, the identity verification fails, and the terminal can execute step 209 below.
Specifically, the terminal may perform feature extraction on the collected face image and the face image corresponding to the value transfer account, and compare the extracted features to obtain the similarity or matching degree between the two images. When the similarity or matching degree is greater than the similarity threshold or matching degree threshold, the terminal may perform step 208; that is, the target condition is that the similarity or matching degree between the collected face image and the face image corresponding to the value transfer account is greater than the similarity threshold or matching degree threshold. When the similarity or matching degree is less than or equal to the threshold, the terminal may perform step 209 described below.
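The feature comparison and threshold test can be sketched as follows. Cosine similarity over feature vectors and the 0.8 threshold are illustrative assumptions; the embodiment does not specify a particular feature extractor, metric, or threshold value.

```python
# Hypothetical sketch: compare extracted face features against the
# features of the value-transfer account's face image.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def meets_target_condition(captured_features, account_features, threshold=0.8):
    """Target condition: similarity above threshold -> proceed to step 208,
    otherwise -> step 209."""
    return cosine_similarity(captured_features, account_features) > threshold
```

In a deployed system the feature vectors would come from a trained face-recognition model, and the threshold would be tuned by the relevant technicians as noted below.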
The above description takes as an example the target condition that the similarity or matching degree between the collected face image and the face image corresponding to the value transfer account is greater than the similarity threshold or matching degree threshold; the target condition may be set or adjusted by the relevant technicians based on usage requirements.
208. And the terminal sends a value transfer request to the server.
The value transfer request is used for indicating the server to execute the business processing corresponding to the value transfer request. And after confirming the identity of the current user and determining that the current user can use the value transfer account to transfer the value, the terminal can initiate a value transfer request and provide a value transfer service by the server.
Specifically, the value transfer request may carry identification information of the value transfer account, the value transfer amount, and so on, and is used to instruct the server to transfer the value transfer amount from the value transfer account to the receiving account. Of course, the value transfer request may also carry other information; the embodiment of the present invention does not limit the specific content carried by the value transfer request.
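A value transfer request of the shape described above can be sketched as a small JSON payload. The field names and values here are purely illustrative assumptions, not a wire format defined by the embodiment, which explicitly leaves the request's content open.

```python
# Hypothetical sketch: assemble the value transfer request sent to the server.
import json

def build_value_transfer_request(account_id, amount, product_id):
    """Carry the account identification, transfer amount, and (as an
    assumed extra field) the target product being purchased."""
    return json.dumps({
        "account_id": account_id,   # identification of the value transfer account
        "amount": amount,           # value transfer amount
        "product_id": product_id,   # assumed field: target product information
    })

request = build_value_transfer_request("acct-001", 129.0, "lipstick-red-bean")
```

The server would verify this request and, on success, return the value-transfer-success message described below.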
Upon receiving the value transfer request, the server can verify it and, when verification passes, execute the corresponding business processing, that is, transfer the value transfer amount from the value transfer account to the receiving account. After completing the value transfer, the server can also send a value transfer success message to the terminal. After receiving this message, the terminal can display a prompt indicating that the value transfer succeeded. Of course, when the value transfer succeeds, the user can obtain the target product corresponding to the target product information.
209. The terminal displays a selection prompt, which prompts the user to choose between performing face recognition again and other value transfer modes. When a face recognition instruction is acquired, the terminal executes steps 206 and 207; when another value transfer instruction is acquired, the terminal performs step 210.
When the face image and the face image corresponding to the value transfer account do not meet the target condition, the current user's identity verification fails, and the terminal can provide more value transfer modes, so that the value transfer does not simply fail and leave the product impossible to purchase.
Specifically, the terminal may display a selection prompt to prompt the user to select from a plurality of numerical transfer modes, and the user may select to re-perform face recognition, or may select other numerical transfer modes, for example, scan an identification code, input an account number password, or the like. If the user chooses to re-perform face recognition, the terminal can acquire a face recognition instruction triggered by the selection operation, and then the terminal can execute the steps 206 and 207 again; if the user selects another value transfer mode, the terminal may obtain another value transfer instruction triggered by the selection operation, and the terminal may perform step 210 described below to implement a further value transfer step.
210. The terminal obtains the value transfer information required by the value transfer mode corresponding to the value transfer instruction, and based on the value transfer information, the terminal executes step 208.
The value transfer information required by different value transfer modes may differ: for example, the mode of scanning an identification code needs to obtain the user's identification code, and the mode of inputting an account password needs to obtain the password of the account from which the value is transferred. If the user selects another value transfer mode, the terminal needs to acquire the value transfer information required by that mode and perform identity verification on the current user based on it, thereby determining whether the value transfer can proceed. If the identity verification succeeds, the terminal can execute step 208; of course, if it fails, the terminal may also perform step 209 described above.
Steps 206 to 210 constitute the process of sending a value transfer request to the server when the value transfer instruction corresponding to the target product information is obtained. In this process, on the basis of providing virtual trial, the terminal can directly request the value transfer service from the server based on the user's value transfer requirement; the trial and value transfer links are continuous and require no manual participation, which can effectively reduce labor cost and improve business processing efficiency.
It should be noted that the above process provides only one possible implementation, in which the terminal collects the face image when the value transfer instruction corresponding to the target product information is acquired, so that identity verification is performed based on the newly collected face image; re-collecting the face image for identity verification can effectively ensure its accuracy.
In another possible implementation, the above steps 206 and 207 may be: when the value transfer instruction corresponding to the target product information is acquired, the terminal compares the previously collected face image or the first image with the face image corresponding to the value transfer account associated with the user information. The terminal can thus perform identity verification directly based on the currently acquired first image or the face image collected from the user before the virtual trial, without re-collecting a face image, which can effectively improve the efficiency of the overall business processing flow. Accordingly, in this implementation, step 208 may be: when the collected face image and the face image corresponding to the value transfer account meet the target condition, the terminal sends a value transfer request to the server; or, when the first image and the face image corresponding to the value transfer account meet the target condition, the terminal sends a value transfer request to the server.
In this other possible implementation, identity verification based on the currently acquired first image or the previously collected face image may fail. In a specific possible embodiment, when the collected face image and the face image corresponding to the value transfer account do not meet the target condition, or when the first image and that face image do not meet the target condition, the terminal may also execute steps 209 and 210. Of course, after step 209, when the terminal acquires a face recognition instruction, it may also perform steps 206 and 207; that is, when identity verification based on the first image or the previously collected face image fails, the terminal may provide more payment methods as well as the option of re-verification, and if the user selects re-verification, the terminal may re-collect a face image for identity verification. The business processing flow corresponding to this implementation may also refer to the embodiment shown in fig. 5 below and is not described again here.
For example, as shown in fig. 3, when the terminal is not in use, it may be in a standby state. When the terminal detects a person within the target range, it may start the image acquisition unit to collect a face and perform face recognition on the collected face image; of course, the detection that a person is within the target range may itself be based on face recognition of an acquired image. The terminal can judge whether the current user is a member: if so, it can acquire the user's member information from the member system and display it; if not, it can provide an interface for registering as a member, for example allowing the user to register with a mobile phone number, and the terminal can synchronize the registered member information to the member system. Taking cosmetic products as an example, after face collection the terminal can enter trial makeup; the user can select goods, and the terminal can process images based on the selected goods and display the makeup effect after trying them. When the user wants to purchase goods, the user can select them, for example adding them to a shopping cart first and then confirming payment. The terminal can support face recognition payment: if face recognition succeeds, the terminal can complete the payment based on the server's payment service; if it fails, the terminal can also provide the option of performing face recognition payment again or other payment methods, for example code-scanning payment.
In one possible implementation, after the step 208, the terminal may further update the historical purchase information and/or the historical browsing information corresponding to the user information. In this way, when the user performs virtual try again, the corresponding multiple candidate product information can be obtained based on the updated user information, and when other users perform virtual try, if the similarity between the other users and the user is greater than a threshold value, the terminal can also perform the step of obtaining the candidate product information based on the historical purchase information and/or the historical browsing information corresponding to the user information of the user.
In a specific possible embodiment, the step of detecting that the target range includes a person, the step of collecting the face image, and the step of acquiring the first image are performed based on the same camera. Likewise, in the various implementations above, the other image acquisition steps performed by the terminal may be implemented based on this camera; for example, the step of collecting the face image in step 206 may also be implemented based on it. That is, every image acquisition step included in the service processing method is implemented based on the same camera.
In the related art, the above steps of detecting that the target range includes a person, acquiring the face image, and acquiring the first image are generally implemented by a plurality of cameras; for example, detecting whether the target range includes a person is implemented by a first camera, face image acquisition and recognition are implemented by a second camera, and acquisition of the first image is implemented by a third camera.
In the embodiment of the present invention, all the steps may be implemented by the same camera. The terminal may store a corresponding configuration file and, based on the configuration file, control the camera to perform the above steps. In one possible implementation, the camera provided by the embodiment of the invention has functions of ranging, color image collection, and light supplementation, so the above steps can be realized by a single camera while ensuring the definition and display effect of the collected image. That is, through software and hardware upgrades, the embodiment of the invention integrates the functions of multiple cameras into one; the camera can be embedded in the terminal, which reduces hardware cost, simplifies the hardware installation procedure, and improves the appearance of the terminal while realizing the functions.
For example, in one specific example, the configuration file may correspond to three software development kits (Software Development Kit, SDK); that is, the image acquisition steps may be implemented based on three SDKs, and when a certain image acquisition step needs to be performed, the terminal may call the corresponding SDK to perform it. Specifically, when a person is detected within the target range, the terminal calls the first SDK to identify the acquired face image. After identification, the terminal can close the first SDK and call the second SDK to perform the step of acquiring the first image; and when a numerical value transfer instruction corresponding to the target product information is acquired, the terminal can close the second SDK and call the third SDK to acquire and identify a face image.
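The open-one-close-the-previous SDK sequencing described above can be sketched as a small controller. The class and the stand-in `DummySDK` are illustrative assumptions; a real terminal would wrap vendor-provided SDK handles rather than these stubs.

```python
class DummySDK:
    """Stand-in for one of the three image-acquisition SDKs; records
    open/close calls so the switching order can be inspected."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def open(self):
        self.log.append(("open", self.name))
    def close(self):
        self.log.append(("close", self.name))

class CameraController:
    """Sketch of the single-camera, three-SDK dispatch: before calling
    the SDK for the next step, close the SDK used for the previous step,
    so only one SDK holds the camera at a time."""
    def __init__(self, sdks):
        self.sdks = sdks   # step name -> SDK handle
        self.active = None
    def switch(self, step):
        if self.active is not None:
            self.sdks[self.active].close()
        self.sdks[step].open()
        self.active = step
```

Usage with the three steps from the example (recognition, first-image acquisition, payment verification) shows that each SDK is closed before its successor opens.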
The specific flow of the above service processing method is described below through a specific example, as shown in fig. 4, taking a cosmetic product as the product. When the user is not using the terminal, the terminal may be in a standby state with an initial screen displayed. When a person is detected in the target range, the image acquisition unit may be started to perform image acquisition and face recognition, and the terminal may display an image acquisition interface and a face recognition interface. The terminal may then perform virtual trial makeup and display a cosmetics screen: the user may select cosmetic product information displayed by the terminal, and the terminal may process the acquired face image using AR technology based on the selected cosmetic product information, displaying an image of the user after applying the cosmetic product. The user may also perform a sliding operation on the terminal to adjust the intensity of the makeup effect; for example, if the product is lipstick, sliding in one direction makes the lipstick color in the image lighter, and sliding in the other direction makes it deeper.
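The sliding adjustment described above can be sketched as a mapping from slide distance to makeup intensity. The pixel-to-intensity sensitivity, the sign convention (rightward slide deepens), and the clamping to [0, 1] are illustrative assumptions, not specified by the patent.

```python
def adjust_intensity(intensity, slide_dx, sensitivity=0.005):
    """Map a horizontal slide (in pixels; positive = rightward) to a new
    makeup intensity in [0.0, 1.0]: sliding one way lightens the applied
    color, sliding the other way deepens it."""
    return min(1.0, max(0.0, intensity + slide_dx * sensitivity))
```

The resulting intensity would then drive the rendering step, e.g. as the blend weight of the lipstick color over the lip pixels.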
The user can perform a touch operation on the purchase button. If the user has previously added cosmetic product information to the shopping cart, the product information has a plurality of items, which may include both the cosmetic product information currently added to the shopping cart and the cosmetic product information added previously; if the user has not previously added cosmetic product information to the shopping cart, the product information has a single item, namely the cosmetic product information currently added to the shopping cart. After the user confirms the payment, the terminal can support face payment: if face recognition succeeds, the terminal can display a payment confirmation interface, and after the user selects confirmation, the terminal can send a payment request to the server; if face recognition fails, the terminal can prompt the user that face recognition failed and prompt the user to choose between re-recognition and code scanning payment. If code scanning payment is selected, the terminal can display an identification code, the user can perform the corresponding code scanning operation, and when the user's identity is confirmed successfully, the terminal also sends the payment request to the server.
According to the embodiment of the invention, when a person is detected, the person is identified and the corresponding user information is determined; candidate product information can be recommended based on the user information, and an image can then be acquired, so that on the basis of that image and the target product information, the image of the person after applying the target product can be displayed, realizing a virtual trial. When the user wants to purchase the target product, a value transfer request can be sent to the server based on the acquired value transfer instruction. In this process, user information can be determined through face recognition to recommend products without requiring the user to log in manually; a virtual trial function is provided without painting on or wearing a physical product; and the selection, trial, and value transfer links are realized in an integrated manner without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method.
The flow of the service processing method is described in the embodiment shown in fig. 2, and an identity verification manner is provided in steps 206 to 210: when a numerical value transfer instruction corresponding to the target product information is acquired, the terminal acquires a face image, so that identity verification is performed based on the newly acquired face image. In one possible implementation manner, the terminal can directly perform identity verification based on the currently acquired first image or the face image acquired by the user before virtual trial without re-acquiring the face image, so that the efficiency of the overall business processing flow can be effectively improved.
The following describes the overall flow of service processing corresponding to the implementation manner by using the embodiment shown in fig. 5, fig. 5 is a flowchart of a service processing method provided by the embodiment of the present invention, and referring to fig. 5, the method may include the following steps:
501. when the target range is detected to contain a person, the terminal identifies the acquired face image and acquires user information corresponding to the face image.
502. The terminal acquires a first image.
503. And the terminal acquires a plurality of candidate product information corresponding to the user information based on the user information.
504. The terminal acquires target product information from the plurality of candidate product information.
The steps 503 and 504 are processes of obtaining target product information from a plurality of candidate product information corresponding to the user information.
505. And the terminal displays a second image corresponding to the first image according to the first image and the target product information.
Steps 501 to 505 are consistent with steps 201 to 205 in the embodiment shown in fig. 2, and are not described again here.
506. When a numerical value transfer instruction corresponding to the target product information is acquired, the terminal compares the acquired face image or the first image with the face image corresponding to the numerical value transfer account associated with the user information.
When the collected face image and the face image corresponding to the numerical value transfer account meet the target condition, or when the first image and the face image corresponding to the numerical value transfer account meet the target condition, the terminal executes step 507;
When the collected face image and the face image corresponding to the numerical value transfer account do not meet the target condition, or when the first image and the face image corresponding to the numerical value transfer account do not meet the target condition, the terminal executes step 508.
In step 506, when the terminal obtains the numerical value transfer instruction, the terminal can directly perform identity verification based on the currently collected first image or the face image collected by the user before performing virtual trial without re-collecting the face image, so that the efficiency of the overall business processing flow can be effectively improved. Specifically, this step 506 may include two cases:
First case: when a numerical value transfer instruction corresponding to the target product information is acquired, the terminal compares the acquired face image with a face image corresponding to a numerical value transfer account associated with the user information.
Corresponding to the first case, when the collected face image and the face image corresponding to the numerical value transfer account meet the target condition, the terminal executes step 507; when the collected face image and the face image corresponding to the numerical value transfer account do not meet the target condition, the terminal executes step 508.
Second case: when a numerical value transfer instruction corresponding to the target product information is acquired, the terminal compares the first image with a face image corresponding to a numerical value transfer account associated with the user information.
Corresponding to the second case, when the first image and the face image corresponding to the numerical value transfer account meet the target condition, the terminal executes step 507; when the first image and the face image corresponding to the numerical value transfer account do not meet the target condition, the terminal executes step 508.
In the two cases, the process of comparing the face image by the terminal is the same as the comparison process shown in the step 207, which is not repeated in the embodiment of the present invention.
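One plausible form of the face comparison in step 506 is a similarity test between feature embeddings of the two images. The use of cosine similarity and the threshold value are illustrative assumptions; the patent only requires that the two images "meet the target condition".

```python
import math

MATCH_THRESHOLD = 0.85  # assumed "target condition": similarity cutoff

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def meets_target_condition(captured_embedding, account_embedding):
    """The captured image (face image or first image) matches the value
    transfer account when the embedding similarity reaches the threshold."""
    return cosine_similarity(captured_embedding, account_embedding) >= MATCH_THRESHOLD
```

In practice the embeddings would come from a face recognition model; here any numeric vectors illustrate the comparison.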
507. And the terminal sends a value transfer request to the server.
The step 507 is similar to the step 208, and the description of the embodiment of the present invention is omitted here.
508. The terminal displays a selection prompt, where the selection prompt is used to prompt the user to choose between performing face recognition again and other numerical value transfer modes. When a face recognition instruction is acquired, the terminal executes step 509; when another numerical value transfer instruction is acquired, the terminal executes step 510.
This step 508 is similar to step 209 above, except for the handling when the face recognition instruction is acquired. In step 206 of the embodiment shown in fig. 2, the terminal re-collects the face image, so when the face recognition instruction is acquired there, the terminal may re-execute step 206. In the embodiment of the present invention, by contrast, the terminal directly uses the previously collected face image or first image for identity verification without re-collecting a face image; when that verification fails and the user chooses to perform face recognition again, the terminal can execute step 509, which is similar to step 206, collecting a new face image for identity verification. Of course, in one possible implementation, when the face recognition instruction is acquired, the terminal may also execute step 506 above to perform face recognition again.
509. The terminal collects the face image, compares the face image with the face image corresponding to the numerical value transfer account, and executes step 507 when the face image and the face image corresponding to the numerical value transfer account meet the target condition; when the face image and the face image corresponding to the numerical value transfer account do not meet the target condition, the terminal executes step 508.
This step 509 is similar to step 206 and step 207 provided in the embodiment shown in fig. 2 and is not described herein.
510. The terminal obtains the value transfer information required by the value transfer mode corresponding to the value transfer instruction, and based on the value transfer information, the terminal executes step 507.
This step 510 is similar to the step 210 provided in the embodiment shown in fig. 2, and the description of the embodiment of the present invention is omitted here.
The steps 506 to 510 are processes of sending a value transfer request to the server when the value transfer instruction corresponding to the target product information is obtained.
According to the embodiment of the invention, user information can be determined through face recognition to recommend products without requiring the user to log in manually; a virtual trial function is provided without painting on or wearing a physical product; and the selection, trial, and value transfer links can be realized in an integrated manner without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method. Further, in the value transfer link, a face recognition payment mode can be adopted, and face recognition can be performed based on the image collected during the virtual trial or the image collected when the user information was determined, which reduces the frequency of image collection, improves the integration and fluency of the service processing flow, and further improves the efficiency of the service processing method.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present invention, which is not described herein.
Fig. 6 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention, referring to fig. 6, the apparatus includes:
The acquiring module 601 is configured to identify an acquired face image when a person is detected to be included in the target range, and acquire user information corresponding to the face image;
The acquiring module 601 is further configured to acquire a first image;
The obtaining module 601 is further configured to obtain target product information from a plurality of candidate product information corresponding to the user information;
the display module 602 is configured to display, according to the first image and the target product information, a second image corresponding to the first image, where the second image is used to reflect an effect that a person in the first image applies a target product corresponding to the target product information;
And the sending module 603 is configured to send a value transfer request to a server when a value transfer instruction corresponding to the target product information is obtained, where the value transfer request is used to instruct the server to execute a service process corresponding to the value transfer request.
In one possible implementation, the obtaining module 601 is further configured to:
When the stored information is determined, according to the identification result, to include the user information corresponding to the face image, acquiring the user information corresponding to the face image from the stored information; or,
And when the stored information is determined to not comprise the user information corresponding to the face image according to the identification result, acquiring the user information corresponding to the face image according to a user information setting interface.
In one possible implementation manner, the obtaining module 601 is further configured to obtain, based on the user information, a plurality of candidate product information corresponding to the user information;
The display module 602 is further configured to obtain target product information from the plurality of candidate product information.
In one possible implementation manner, the obtaining module 601 is further configured to obtain a plurality of candidate product information corresponding to the user information according to at least one of: historical purchase information and historical browsing information corresponding to the user information, and historical purchase information and historical browsing information corresponding to other user information whose similarity to the user information is greater than a threshold;
Correspondingly, the device further comprises:
and the updating module is used for updating the historical purchase information and/or the historical browsing information corresponding to the user information.
In one possible implementation, the obtaining module 601 is further configured to:
acquiring the first candidate product information as the target product information according to the order of the plurality of candidate product information; or,
According to a product selection instruction, obtaining the candidate product information corresponding to the product selection instruction as the target product information; or,
From the plurality of candidate product information, one candidate product information is randomly acquired as the target product information.
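The three selection strategies above can be sketched in one helper. The function name, the `selection_index` stand-in for a product selection instruction, and the injected `rng` are illustrative assumptions.

```python
import random

def select_target(candidates, selection_index=None, rng=None):
    """Pick target product information from the candidate list using one of
    the three strategies described above: honor an explicit selection
    instruction, pick randomly when a random strategy is configured, or
    default to the first candidate in order."""
    if selection_index is not None:      # product selection instruction
        return candidates[selection_index]
    if rng is not None:                  # random strategy
        return rng.choice(candidates)
    return candidates[0]                 # first in the candidate order
```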
In one possible implementation, the sending module 603 is configured to:
When a numerical value transfer instruction corresponding to the target product information is acquired, comparing the acquired face image or the first image with a face image corresponding to a numerical value transfer account associated with the user information;
when the collected face image and the face image corresponding to the numerical value transfer account meet the target condition, sending a numerical value transfer request to the server; or when the first image and the face image corresponding to the numerical value transfer account meet the target condition, sending a numerical value transfer request to the server.
In one possible implementation, the sending module 603 is configured to:
When a numerical value transfer instruction corresponding to the target product information is acquired, acquiring a face image;
Comparing the face image with a face image corresponding to the numerical value transfer account associated with the user information;
And when the face image and the face image corresponding to the numerical value transfer account meet the target condition, sending a numerical value transfer request to a server.
In one possible implementation, the display module 602 is further configured to:
When the collected face image and the face image corresponding to the numerical value transfer account do not meet the target condition, displaying a selection prompt, where the selection prompt is used to prompt the user to choose between performing face recognition again and other numerical value transfer modes; or,
when the first image and the face image corresponding to the numerical value transfer account do not meet the target condition, displaying a selection prompt, where the selection prompt is used to prompt the user to choose between performing face recognition again and other numerical value transfer modes.
In one possible implementation, the sending module 603 is further configured to:
When a face recognition instruction is acquired, executing the step of comparing the collected face image or the first image with the face image corresponding to the numerical value transfer account associated with the user information; or executing the step of collecting a face image and comparing it with the face image corresponding to the numerical value transfer account associated with the user information; or,
When other numerical value transfer instructions are obtained, obtaining the numerical value transfer information required by the numerical value transfer mode corresponding to the numerical value transfer instructions, and executing the step of sending the numerical value transfer request to the server based on the numerical value transfer information.
In one possible implementation, the display module 602 is configured to:
Determining the position information of a product image corresponding to the target product information according to the first image and the target product information, where the product image refers to an image of the product as applied to the person in the first image; displaying the first image, and displaying the product image corresponding to the target product information at the position indicated by the position information in the first image; or,
Generating a second image corresponding to the first image according to the first image and a product image corresponding to the target product information, where the product image refers to an image of the product as applied to the person in the first image; and displaying the second image; or,
Processing the first image according to pixel information corresponding to the target product information to obtain a second image corresponding to the first image, wherein the pixel information is used for representing pixel values of pixel points corresponding to the product and distribution information of the pixel points; the second image is displayed.
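The third display variant, processing the first image according to the product's pixel information, can be sketched as a per-pixel blend. The dictionary image representation, the `alpha` weight (which corresponds to the transparency discussed next), and the rounding are illustrative assumptions; a real implementation would operate on image buffers.

```python
def apply_product_pixels(image, product_pixels, alpha=1.0):
    """Blend the product's pixel values into the first image at the pixel
    positions given by the product's pixel information. `image` maps
    (x, y) -> (r, g, b); `product_pixels` maps the affected coordinates
    to the product color; `alpha` in [0, 1] controls how strongly the
    product color replaces the original pixel."""
    result = dict(image)
    for (x, y), (pr, pg, pb) in product_pixels.items():
        r, g, b = result[(x, y)]
        result[(x, y)] = (
            round(r * (1 - alpha) + pr * alpha),
            round(g * (1 - alpha) + pg * alpha),
            round(b * (1 - alpha) + pb * alpha),
        )
    return result
```

With `alpha=1.0` the product pixels fully replace the originals; lowering `alpha` yields the lighter, more transparent rendering of the product.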
In one possible implementation manner, the obtaining module 601 is further configured to obtain, according to an adjustment instruction, transparency corresponding to the target product information;
correspondingly, the display module 602 is configured to display, at the position indicated by the position information in the first image, the product image corresponding to the target product information according to the transparency; or,
accordingly, the display module 602 is configured to display the first image within the second image, and display the product image within the second image according to the transparency; or,
Correspondingly, the display module 602 is configured to display a pixel corresponding to the first image in the second image, and display a pixel corresponding to the target product information according to the transparency.
In one possible implementation, the step of detecting that the target range includes a person, the step of acquiring the face image, and the step of acquiring the image in the first image are implemented based on the same camera.
According to the device provided by the embodiment of the invention, when a person is detected, the person is identified and the corresponding user information is determined; candidate product information can be recommended based on the user information, and an image can then be acquired, so that on the basis of that image and the target product information, the image of the person after applying the target product can be displayed, realizing a virtual trial. When the user wants to purchase the target product, a value transfer request can be sent to the server based on the acquired value transfer instruction. In this process, user information can be determined through face recognition to recommend products without requiring the user to log in manually; a virtual trial function is provided without painting on or wearing a physical product; and the selection, trial, and value transfer links are realized in an integrated manner without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method.
It should be noted that: in the service processing device provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the terminal is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the service processing device and the service processing method embodiment provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the service processing device and the service processing method embodiment are detailed in the method embodiment, which is not repeated herein.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 700 includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the business processing methods provided by the method embodiments of the present invention.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display 705, a camera 706, audio circuitry 707, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may further include NFC (Near Field Communication) related circuits, which is not limited by the present invention.
The display screen 705 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, it also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, providing the front panel of the terminal 700; in other embodiments, there may be at least two displays 705, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved or folded surface of the terminal 700. The display 705 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used to collect sound waves from users and the environment, convert them into electrical signals, and input them to the processor 701 for processing or to the radio frequency circuit 704 for voice communication. For stereo acquisition or noise reduction, a plurality of microphones may be disposed at different portions of the terminal 700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
A power supply 709 is used to power the various components in the terminal 700. The power supply 709 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 709 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charge technology.
In some embodiments, the terminal 700 further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitudes of acceleration on the three coordinate axes of the coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 701 may control the touch display screen 705 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used to collect motion data for games or users.
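The landscape/portrait decision described above can be sketched from the gravity components alone. A minimal illustration, not the patent's actual method; the axis convention and the threshold-free comparison are assumptions:

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a UI orientation from the gravity components (m/s^2)
    reported by an accelerometer. Axis convention (an assumption):
    y is the device's long axis, x its short axis."""
    # Gravity acting mostly along the long axis means the device is
    # held upright, so the UI is shown in portrait; otherwise landscape.
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

print(choose_orientation(0.3, 9.7))  # device held upright
print(choose_orientation(9.7, 0.3))  # device lying on its side
```

A real implementation would add hysteresis so the UI does not flip when gravity is split nearly evenly between the two axes.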
The gyro sensor 712 may detect the body direction and rotation angle of the terminal 700, and may cooperate with the acceleration sensor 711 to collect the user's 3D motion on the terminal 700. Based on the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed at a side frame of the terminal 700 and/or at a lower layer of the touch display screen 705. When the pressure sensor 713 is disposed at a side frame of the terminal 700, the user's grip signal on the terminal 700 may be detected, and the processor 701 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the touch display screen 705, the processor 701 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 705. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 705 is decreased. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
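The light-to-brightness control above is a monotone mapping from lux to a brightness level. A sketch of one such mapping; the linear curve, the 1000-lux saturation point, and the brightness range are illustrative assumptions:

```python
def display_brightness(lux: float, min_b: float = 0.1, max_b: float = 1.0,
                       max_lux: float = 1000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [min_b, max_b] via a clamped linear ramp (illustrative curve)."""
    ratio = min(max(lux / max_lux, 0.0), 1.0)  # clamp to [0, 1]
    return min_b + (max_b - min_b) * ratio

# A dark room maps to the dimmest setting, strong light to the brightest.
print(display_brightness(0.0), display_brightness(2000.0))
```

Real auto-brightness curves are usually non-linear and smoothed over time to avoid visible flicker; the clamped ramp only shows the direction of the adjustment.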
A proximity sensor 716, also referred to as a distance sensor, is typically provided on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front face of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the bright-screen state to the off-screen state; when the proximity sensor 716 detects that the distance between the user and the front face of the terminal 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from the off-screen state to the bright-screen state.
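The proximity behaviour is a small state machine driven by the distance trend. A sketch under a simplifying assumption: the bare trend alone drives the switch, with no hysteresis threshold:

```python
def screen_state(prev_distance: float, distance: float, state: str) -> str:
    """Return the next screen state ("bright" or "off") given the
    previous and current user-to-front-face distances. Using the raw
    trend with no threshold is an illustrative simplification."""
    if distance < prev_distance and state == "bright":
        return "off"     # user approaching -> turn the screen off
    if distance > prev_distance and state == "off":
        return "bright"  # user moving away -> light the screen again
    return state         # no trend change -> keep the current state
```

In practice a minimum distance change would be required before toggling, so sensor noise near a fixed distance does not flicker the screen.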
Those skilled in the art will appreciate that the structure shown in fig. 7 is not limiting of the terminal 700 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including instructions executable by a processor to perform the service processing method of the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. A service processing method, applied to a terminal, wherein the terminal is a cosmetic mirror or a fitting mirror and a screen of the terminal is a mirror screen, the method comprising:
when it is detected that a person is included in a target range, recognizing the collected face image; when it is determined according to the recognition result that stored information includes user information corresponding to the face image, obtaining the user information corresponding to the face image from the stored information; when it is determined according to the recognition result that the stored information does not include user information corresponding to the face image, obtaining the user information corresponding to the face image through a user information setting interface; and displaying the obtained user information corresponding to the face image; wherein the user information is information provided when the user registers an account and is used to represent the identity of the user;
Acquiring a first image in real time, wherein the first image comprises a person to be subjected to virtual trial;
displaying the acquired first image;
acquiring a plurality of candidate product information corresponding to the user information according to at least one of historical purchase information and historical browsing information corresponding to the user information, and historical purchase information and historical browsing information corresponding to other user information whose similarity with the user information is greater than a threshold; and determining, based on the matching degree between each piece of the plurality of candidate product information and the user information, the candidate product information with the largest matching degree as target product information;
acquiring human body part information corresponding to the target product information, wherein the human body part information is used to indicate the human body part on which a target product corresponding to the target product information is applied; performing human body part detection on the first image, and determining, based on the shape and size of the human body part on which the target product is applied in the first image, position information of the product image corresponding to the target product information in the first image, wherein the product image refers to the image of the product as applied to the human body in the first image; and displaying the first image, and displaying the product image corresponding to the target product information according to a set transparency at the position indicated by the position information in the first image, to obtain a second image corresponding to the first image;
after the second image is displayed, acquiring, according to an adjustment instruction triggered by a sliding operation, a transparency corresponding to the target product information, and displaying the product image corresponding to the target product information according to the transparency at the position indicated by the position information in the first image, wherein the sliding operation is used to adjust the display effect of the target product in the second image in real time, so that the user can see, in the currently displayed second image, the change in the degree to which the target product is reflected on the first image; wherein, when the sliding operation is a leftward slide, the transparency corresponding to the target product information becomes larger and the color of the product image displayed according to the transparency becomes lighter, and when the sliding operation is a rightward slide, the transparency corresponding to the target product information becomes smaller and the color of the product image displayed according to the transparency becomes darker;
providing a value transfer button and an add button when the second image or the plurality of candidate product information is displayed; in response to a touch operation performed by the user on the value transfer button, determining that a value transfer instruction corresponding to the target product information is acquired, comparing the collected face image or the first image with the face image corresponding to the value transfer account associated with the user information, and, when the collected face image or the first image and the face image corresponding to the value transfer account meet a target condition, sending a value transfer request to a server, wherein the value transfer request is used to instruct the server to execute the service processing corresponding to the value transfer request; when the collected face image or the first image and the face image corresponding to the value transfer account do not meet the target condition, displaying a selection prompt, wherein the selection prompt is used to prompt a selection between performing face recognition again and other value transfer modes; and the add button is used to take the target product information as product information to be subjected to value transfer;
After the sending the value transfer request to the server, the method further includes:
and updating the historical purchase information and/or the historical browsing information corresponding to the user information.
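The overlay and slide-to-adjust behaviour in claim 1 amounts to per-pixel alpha compositing, with the slide direction driving the transparency. A sketch under assumed conventions: RGB tuples, a transparency in [0, 1] where 1.0 fully hides the product, and an arbitrary slide sensitivity:

```python
def blend_pixel(base, product, transparency):
    """Composite one product pixel over one base pixel; transparency
    1.0 hides the product (lighter result), 0.0 shows it fully
    (darker result). RGB tuple layout is an assumption."""
    alpha = 1.0 - transparency
    return tuple(round(alpha * p + (1.0 - alpha) * b)
                 for b, p in zip(base, product))

def adjust_transparency(transparency, slide_dx, sensitivity=0.005):
    """Map a horizontal slide (pixels, negative = leftward) to a new
    transparency, clamped to [0, 1]: sliding left raises it (lighter
    colour), sliding right lowers it (darker colour), as in claim 1."""
    return min(max(transparency - slide_dx * sensitivity, 0.0), 1.0)

skin, lipstick = (200, 180, 170), (150, 40, 60)
t = adjust_transparency(0.5, slide_dx=-40)  # leftward slide -> more transparent
print(blend_pixel(skin, lipstick, t))       # result moves toward the skin tone
```

In a real renderer the blend would run over every pixel inside the region given by the position information, typically on the GPU rather than per pixel in Python.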
2. The method according to claim 1, wherein the method further comprises:
acquiring the first candidate product information as the target product information according to the order of the candidate product information; or
acquiring, according to a product selection instruction, the candidate product information corresponding to the product selection instruction as the target product information; or
And randomly acquiring one piece of candidate product information from the plurality of candidate product information as the target product information.
3. The method according to claim 1, wherein the method further comprises:
When a numerical value transfer instruction corresponding to the target product information is acquired, acquiring a face image;
comparing the face image with a face image corresponding to a numerical value transfer account associated with the user information;
and when the face image and the face image corresponding to the numerical value transfer account meet the target condition, sending the numerical value transfer request to the server.
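The "target condition" for the face comparison in claim 3 is not pinned down; one common reading is a similarity threshold over face feature vectors. A sketch of that reading; the embedding representation and the 0.8 threshold are assumptions, not part of the claim:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def meets_target_condition(live_vec, account_vec, threshold=0.8):
    """Decide whether the live face matches the face bound to the
    value transfer account; the threshold value is illustrative."""
    return cosine_similarity(live_vec, account_vec) >= threshold
```

Under this reading, the value transfer request is sent only when `meets_target_condition` returns true; otherwise the terminal falls back to the selection prompt of claim 4.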
4. A method according to claim 3, characterized in that the method further comprises:
when a face recognition instruction is acquired, executing the step of comparing the collected face image or the first image with the face image corresponding to the value transfer account associated with the user information, or executing the steps of collecting a face image and comparing the face image with the face image corresponding to the value transfer account associated with the user information; or
And when other numerical value transfer instructions are acquired, acquiring numerical value transfer information required by a numerical value transfer mode corresponding to the numerical value transfer instructions, and executing the step of sending a numerical value transfer request to a server based on the numerical value transfer information.
5. The method according to claim 1, wherein the method further comprises:
generating a second image corresponding to the first image according to the first image and the product image corresponding to the target product information, and displaying the second image; or
Processing the first image according to pixel information corresponding to the target product information to obtain a second image corresponding to the first image, wherein the pixel information is used for representing pixel values of pixel points corresponding to the product and distribution information of the pixel points; and displaying the second image.
6. The method of claim 5, wherein, correspondingly, displaying the second image comprises: displaying the first image in the second image, and displaying the product image in the second image according to the transparency; or
Accordingly, the displaying the second image includes: and displaying the pixel points corresponding to the first image in the second image, and displaying the pixel points corresponding to the target product information according to the transparency.
7. The method of claim 1, wherein the detection of whether a person is included in the target range, the collection of the face image, and the collection of the first image are performed based on the same camera.
8. A service processing apparatus, wherein the apparatus is a cosmetic mirror or a fitting mirror, a screen of the apparatus is a mirror screen, and the apparatus comprises:
The acquisition module is used for identifying the acquired face image when the person is detected to be included in the target range, and acquiring the user information corresponding to the face image from the stored information when the user information corresponding to the face image is determined to be included in the stored information according to the identification result; when the stored information is determined to not comprise the user information corresponding to the face image according to the identification result, acquiring the user information corresponding to the face image according to a user information setting interface; the user information is information provided when the user registers an account, and is used for representing the identity of the user;
A module for performing the steps of: displaying the user information corresponding to the obtained face image;
The acquisition module is further used for acquiring a first image in real time, wherein the first image comprises a person to be subjected to virtual trial;
A module for performing the steps of: displaying the acquired first image;
the acquisition module is further configured to acquire a plurality of candidate product information corresponding to the user information according to at least one of historical purchase information and historical browsing information corresponding to the user information, and historical purchase information and historical browsing information corresponding to other user information whose similarity with the user information is greater than a threshold; and determine, based on the matching degree between each piece of the plurality of candidate product information and the user information, the candidate product information with the largest matching degree as target product information;
The display module is used for acquiring human body part information corresponding to the target product information, and the human body part information is used for indicating the human body part of a target product corresponding to the target product information when the target product is applied in a human body; detecting the human body part of the first image, and determining the position information of the human body part corresponding to the product image corresponding to the target product information in the first image based on the shape and the size of the human body part where the target product is applied in the first image, wherein the product image refers to the image when the product is applied to the human body in the first image; displaying the first image, and displaying a product image corresponding to the target product information according to the set transparency at the position indicated by the position information in the first image to obtain a second image corresponding to the first image;
the display module is further configured to acquire, after the second image is displayed, a transparency corresponding to the target product information according to an adjustment instruction triggered by a sliding operation, and display, according to the transparency, the product image corresponding to the target product information at the position indicated by the position information in the first image, wherein the sliding operation is used to adjust the display effect of the target product in the second image in real time, so that the user can see, in the currently displayed second image, the change in the degree to which the target product is reflected on the first image; wherein, when the sliding operation is a leftward slide, the transparency corresponding to the target product information becomes larger and the color of the product image displayed according to the transparency becomes lighter, and when the sliding operation is a rightward slide, the transparency corresponding to the target product information becomes smaller and the color of the product image displayed according to the transparency becomes darker;
providing a value transfer button and an add button when the second image or the plurality of candidate product information is displayed; a sending module, configured to determine, in response to a touch operation performed by the user on the value transfer button, that a value transfer instruction corresponding to the target product information is acquired, compare the collected face image or the first image with the face image corresponding to the value transfer account associated with the user information, and send a value transfer request to a server when the collected face image or the first image and the face image corresponding to the value transfer account meet a target condition, wherein the value transfer request is used to instruct the server to execute the service processing corresponding to the value transfer request;
the display module is further configured to display a selection prompt when the collected face image or the first image and the face image corresponding to the value transfer account do not meet the target condition, wherein the selection prompt is used to prompt a selection between performing face recognition again and other value transfer modes; and the add button is used to take the target product information as product information to be subjected to value transfer;
and the updating module is used for updating the historical purchase information and/or the historical browsing information corresponding to the user information.
9. A terminal comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the service processing method of any of claims 1 to 7.
10. A computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the service processing method of any of claims 1 to 7.
CN201910016186.2A 2019-01-08 2019-01-08 Service processing method, device, terminal and storage medium Active CN111415185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910016186.2A CN111415185B (en) 2019-01-08 2019-01-08 Service processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111415185A CN111415185A (en) 2020-07-14
CN111415185B true CN111415185B (en) 2024-05-28

Family

ID=71492578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910016186.2A Active CN111415185B (en) 2019-01-08 2019-01-08 Service processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111415185B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001303A (en) * 2020-08-21 2020-11-27 四川长虹电器股份有限公司 Television image-keeping device and method
CN112907804A (en) * 2021-01-15 2021-06-04 北京市商汤科技开发有限公司 Interaction method and device of access control machine, access control machine assembly, electronic equipment and medium
CN112818765B (en) * 2021-01-18 2023-09-19 中科院成都信息技术股份有限公司 Image filling identification method, device and system and storage medium
CN114880057A (en) * 2022-04-22 2022-08-09 北京三快在线科技有限公司 Image display method, image display device, terminal, server, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206236156U (en) * 2016-11-21 2017-06-09 汕头市智美科技有限公司 A kind of virtual examination adornment equipment
CN107818110A (en) * 2016-09-13 2018-03-20 青岛海尔多媒体有限公司 A kind of information recommendation method, device
JP2018120527A (en) * 2017-01-27 2018-08-02 株式会社リコー Image processing apparatus, image processing method, and image processing system
CN108509466A (en) * 2017-04-14 2018-09-07 腾讯科技(深圳)有限公司 A kind of information recommendation method and device
CN108648061A (en) * 2018-05-18 2018-10-12 北京京东尚科信息技术有限公司 image generating method and device
CN108694736A (en) * 2018-05-11 2018-10-23 腾讯科技(深圳)有限公司 Image processing method, device, server and computer storage media
CN109034935A (en) * 2018-06-06 2018-12-18 平安科技(深圳)有限公司 Products Show method, apparatus, computer equipment and storage medium
CN109118465A (en) * 2018-08-08 2019-01-01 颜沿(上海)智能科技有限公司 A kind of examination adornment system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120232977A1 (en) * 2011-03-08 2012-09-13 Bank Of America Corporation Real-time video image analysis for providing targeted offers



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40025781

Country of ref document: HK

GR01 Patent grant