CN111104854A - Evaluation information processing method and device, electronic device and image processing method - Google Patents


Info

Publication number
CN111104854A
CN111104854A (application CN201911094237.XA)
Authority
CN
China
Prior art keywords
image
target user
expression
evaluation information
real
Prior art date
Legal status
Pending
Application number
CN201911094237.XA
Other languages
Chinese (zh)
Inventor
张莹
钱鸿强
Current Assignee
Zhejiang Koubei Network Technology Co Ltd
Original Assignee
Zhejiang Koubei Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Koubei Network Technology Co Ltd
Priority to CN201911094237.XA
Publication of CN111104854A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0282: Rating or review of business operators or products
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition


Abstract

The embodiments of the present application disclose an evaluation information processing method, comprising: obtaining an expression simulation image that simulates a real expression of a target user; and sending the expression simulation image to a first device as the image corresponding to the target user's evaluation information for a target object, or displaying the expression simulation image in an evaluation information display area corresponding to the target user, where the evaluation information display area is used to display the target user's evaluation information for the target object. Because the expression simulation image simulates the target user's real expression, it conveys how the target user actually felt after consuming a product or visiting a merchant. Evaluating a product or merchant through an image that carries the user's real expression therefore reflects the user's true feelings intuitively and improves the accuracy of the evaluation information.

Description

Evaluation information processing method and device, electronic device and image processing method
Technical Field
The present application relates to the field of computer technology, and in particular to an evaluation information processing method; it also relates to an evaluation information processing apparatus, an electronic device, and a computer storage medium, as well as an image processing method, an image processing apparatus, an electronic device, and a computer storage medium.
Background
With the continuous development of internet technology, many internet-based service platforms have emerged, such as online meal ordering, online travel booking, and online hotel reservation. Through such a platform, users can browse or purchase products from shops connected to it, which brings convenience to their work and life.
In recent years, many service platforms have added user evaluation functions: after consuming a product, a user can evaluate the product or the merchant so that later users can decide what to purchase based on the evaluation information. At present, however, evaluation information consists mainly of text typed by the user. Text alone often cannot intuitively reflect the user's true attitude toward the product or merchant, which reduces the accuracy of the evaluation information.
Therefore, improving the accuracy of evaluation information has become an urgent technical problem for those skilled in the art.
Disclosure of Invention
The embodiments of the present application provide an evaluation information processing method that improves the accuracy of evaluation information. The embodiments also relate to a second evaluation information processing method, two evaluation information processing apparatuses, electronic devices, and computer storage media, as well as an image processing method, an image processing apparatus, an electronic device, and a computer storage medium.
The embodiments of the present application provide an evaluation information processing method, comprising: obtaining an expression simulation image for simulating a real expression of a target user; and sending the expression simulation image to a first device as the image corresponding to the target user's evaluation information for a target object, or displaying the expression simulation image in an evaluation information display area corresponding to the target user, where the evaluation information display area is used to display the target user's evaluation information for the target object.
Optionally, obtaining the expression simulation image for simulating the real expression of the target user includes: obtaining a dynamic image simulating the real expression of the target user; capturing at least one static frame from the dynamic image; and generating the expression simulation image from the at least one static frame.
Optionally, the method further includes: displaying the evaluation information display area; and obtaining a first trigger operation performed by the target user in the evaluation information display area, the first trigger operation being used to obtain the expression simulation image. Obtaining the expression simulation image for simulating the real expression of the target user then includes: obtaining the expression simulation image in response to the first trigger operation.
Optionally, obtaining the expression simulation image in response to the first trigger operation includes: displaying, in response to the first trigger operation, an interface for generating the expression simulation image; obtaining a real expression image of the target user through an image acquisition device; generating the expression simulation image from the real expression image of the target user; and displaying the expression simulation image on the interface.
Optionally, the method further includes: obtaining a second trigger operation performed by the target user on the interface, the second trigger operation being used to obtain the real expression image of the target user. Obtaining the real expression image of the target user through the image acquisition device then includes: obtaining the real expression image of the target user in response to the second trigger operation.
Optionally, the method further includes: obtaining a third trigger operation performed by the target user on the interface, the third trigger operation being used to stop obtaining the real expression image of the target user. Generating the expression simulation image from the real expression image of the target user then includes: generating, in response to the third trigger operation, the expression simulation image from the real expression images obtained so far.
Optionally, the method further includes: obtaining voice evaluation information of the target user for the target object; converting the voice evaluation information into text evaluation information; and sending the text evaluation information to the first device, or displaying it in the evaluation information display area.
Optionally, the method further includes: sending the voice evaluation information of the target user for the target object to the first device, or playing the voice evaluation information in the evaluation information display area.
Optionally, the voice evaluation information is simulated voice evaluation information, which simulates the real voice evaluation information of the target user for the target object.
Optionally, the method further includes: obtaining real voice evaluation information of the target user for the target object; and generating the simulated voice evaluation information from the real voice evaluation information.
Optionally, the method further includes: sending, to a second device, a request message for obtaining evaluation information for the target object. Obtaining the expression simulation image for simulating the real expression of the target user then includes: obtaining the expression simulation image provided by the second device.
Optionally, the method further includes: obtaining a fourth trigger operation for displaying evaluation information for the target object. Sending the request message to the second device then includes: sending, in response to the fourth trigger operation, the request message for obtaining evaluation information for the target object to the second device.
Optionally, the expression simulation image includes a target user simulation carrier for simulating the target user, and the carrier bears simulated expression features that simulate the real expression features of the target user.
Optionally, the method further includes: obtaining the target user simulation carrier selected by the target user from displayed candidate simulation carriers, obtaining it from the simulation carriers stored on the current device, or obtaining a carrier edited by the target user on the current device.
The embodiments of the present application further provide an evaluation information processing method, comprising: obtaining a request message, sent by a third device, for obtaining evaluation information for a target object; and providing, to the third device, an expression simulation image corresponding to a target user's evaluation information for the target object, the expression simulation image being used to simulate the real expression of the target user.
Optionally, the method further includes: obtaining the expression simulation image provided by a fourth device.
Optionally, the method further includes: obtaining text evaluation information of the target user for the target object, provided by the fourth device. Providing the expression simulation image to the third device then includes: providing both the expression simulation image and the text evaluation information to the third device.
Optionally, the method further includes: obtaining voice evaluation information of the target user for the target object, provided by the fourth device. Providing the expression simulation image to the third device then includes: providing both the expression simulation image and the voice evaluation information to the third device.
Optionally, the voice evaluation information is simulated voice evaluation information, which simulates the real voice evaluation information of the target user for the target object.
The embodiments of the present application further provide an evaluation information processing apparatus, including: an expression simulation image obtaining unit, configured to obtain an expression simulation image for simulating a real expression of a target user; and an expression simulation image processing unit, configured to send the expression simulation image to a first device as the image corresponding to the target user's evaluation information for a target object, or to display the expression simulation image in an evaluation information display area corresponding to the target user, where the evaluation information display area is used to display the target user's evaluation information for the target object.
The embodiments of the present application further provide an evaluation information processing apparatus, including: a request message obtaining unit, configured to obtain a request message, sent by a third device, for obtaining evaluation information for a target object; and an expression simulation image providing unit, configured to provide, to the third device, an expression simulation image corresponding to a target user's evaluation information for the target object, the expression simulation image being used to simulate the real expression of the target user.
The embodiments of the present application further provide an electronic device, including: a processor; and a memory storing a computer program which, when read and executed by the processor, performs the above evaluation information processing method.
The embodiments of the present application further provide a computer storage medium storing a computer program which, when executed by a processor, performs the above evaluation information processing method.
The embodiments of the present application further provide an image processing method, including: obtaining a dynamic image simulating a real expression of a target user; capturing at least one static frame from the dynamic image; and generating an expression simulation image from the at least one static frame, the expression simulation image being used to simulate the real expression of the target user.
Optionally, capturing at least one static frame from the dynamic image includes: obtaining a capture trigger operation for the dynamic image; and capturing at least one static frame from the dynamic image in response to the capture trigger operation.
Optionally, capturing at least one static frame from the dynamic image includes: capturing a static image of a first target frame from the dynamic image; selecting, from the dynamic image, a static image of a second target frame located after the first target frame; computing the difference between the static images of the second and first target frames to obtain first difference data; selecting a static image of a third target frame located after the second target frame; computing the difference between the static images of the third and second target frames to obtain second difference data; and so on, obtaining the difference data between the static images of every pair of adjacent target frames. Generating the expression simulation image from the at least one static frame then includes: generating the expression simulation image from the static image of the first target frame together with the difference data between all adjacent target frames.
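As an illustration, the first-frame-plus-differences scheme described above can be sketched in a few lines; this is a minimal model over flat lists of grayscale pixel values, assumed for the demo, not the application's actual encoding:

```python
# Minimal sketch of the scheme above: keep the first target frame intact,
# store only the per-pixel difference for each later frame, and rebuild
# every frame from the first frame plus the accumulated differences.
# Frames are modeled as flat lists of grayscale pixel values.

def encode_frames(frames):
    """Return (first_frame, diffs), where diffs[i] = frames[i+1] - frames[i]."""
    first = list(frames[0])
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        diffs.append([c - p for c, p in zip(curr, prev)])
    return first, diffs

def decode_frames(first, diffs):
    """Rebuild the full frame sequence from the first frame and the diffs."""
    frames = [list(first)]
    for diff in diffs:
        frames.append([p + d for p, d in zip(frames[-1], diff)])
    return frames

if __name__ == "__main__":
    frames = [[10, 20, 30], [12, 20, 28], [12, 25, 28]]
    first, diffs = encode_frames(frames)
    print(diffs)                                  # → [[2, 0, -2], [0, 5, 0]]
    assert decode_frames(first, diffs) == frames  # lossless round trip
```

Only pixels that change between adjacent frames carry non-zero entries, which is why transmitting the first frame plus differences can be cheaper than transmitting every frame in full.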
Optionally, obtaining the dynamic image simulating the real expression of the target user includes: obtaining a dynamic image, displayed in an information interaction application, that simulates the real expression of the target user. The method further includes: displaying the expression simulation image in an evaluation information display area.
The embodiments of the present application further provide an image processing apparatus, including: an obtaining unit, configured to obtain a dynamic image simulating a real expression of a target user; a capturing unit, configured to capture at least one static frame from the dynamic image; and a generating unit, configured to generate an expression simulation image from the at least one static frame, the expression simulation image being used to simulate the real expression of the target user.
The embodiments of the present application further provide an electronic device, including: a processor; and a memory storing a computer program which, when read and executed by the processor, performs the above image processing method.
The embodiments of the present application further provide a computer storage medium storing a computer program which, when executed by a processor, performs the above image processing method.
Compared with the prior art, the method has the following advantages:
The embodiments of the present application provide an evaluation information processing method, comprising: obtaining an expression simulation image for simulating a real expression of a target user; and sending the expression simulation image to a first device as the image corresponding to the target user's evaluation information for a target object, or displaying it in an evaluation information display area corresponding to the target user, where that area displays the target user's evaluation information for the target object. Because the expression simulation image simulates the target user's real expression, it conveys how the target user actually felt after consuming a product or visiting a merchant. Evaluating a product or merchant through an image carrying the user's real expression therefore reflects the user's true feelings intuitively and improves the accuracy of the evaluation information.
The embodiments of the present application further provide an evaluation information processing method, comprising: obtaining a request message, sent by a third device, for obtaining evaluation information for a target object; and providing, to the third device, an expression simulation image corresponding to a target user's evaluation information for the target object, the expression simulation image simulating the real expression of the target user. Upon receiving the request message, this method supplies the third device with an expression simulation image that simulates the target user's real expression as the evaluation information. Since that expression genuinely reflects how the target user felt after consuming the target object, the simulated image accurately conveys the user's true evaluation, improving the accuracy of the evaluation information.
The embodiments of the present application provide an image processing method, comprising: obtaining a dynamic image simulating a real expression of a target user; capturing at least one static frame from the dynamic image; and generating an expression simulation image from the at least one static frame, the expression simulation image being used to simulate the real expression of the target user. Because the dynamic image simulates the target user's real expression, and the expression simulation image is generated from static frames captured from it, the resulting image can be displayed on a variety of terminal devices while still simulating the target user's real expression.
Drawings
FIG. 1 is a schematic diagram of an embodiment of an application scenario provided by the present application;
fig. 2 is a flowchart of an evaluation information processing method according to a first embodiment of the present application;
fig. 3 is a flowchart of a method for acquiring an expression simulation image according to a first embodiment of the present application;
fig. 4a is a first schematic diagram of obtaining an expression simulation image according to a first embodiment of the present application;
fig. 4b is a second schematic diagram of acquiring an expression simulation image according to the first embodiment of the present application;
fig. 4c is a third schematic diagram of obtaining an expression simulation image according to the first embodiment of the present application;
fig. 4d is a fourth schematic diagram of obtaining an expression simulation image according to the first embodiment of the present application;
fig. 5 is a flowchart of an evaluation information processing method according to a second embodiment of the present application;
fig. 6 is a schematic diagram of an evaluation information processing apparatus according to a third embodiment of the present application;
fig. 7 is a schematic logical structure diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of an evaluation information processing apparatus according to a fifth embodiment of the present application;
fig. 9 is a flowchart of an image processing method according to a ninth embodiment of the present application;
fig. 10 is a schematic diagram of an image processing apparatus according to a tenth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments of the present application. The embodiments, however, can be implemented in many forms other than those described here, and those skilled in the art can make similar generalizations without departing from the spirit and scope of the embodiments; the application is therefore not limited to the specific embodiments disclosed below.
Some embodiments provided herein apply to interaction between a user terminal 101 and a server 102. Fig. 1 is a schematic diagram of an application scenario provided in the present application. The user terminal 101 first connects to the server 102. After connecting, the user terminal 101 is triggered to start a camera and scan the user's facial expression; a model carrier selected by the user is displayed on the recording interface of the user terminal 101, and this carrier simulates the user's current facial expression. Once the recording interface has recorded the carrier-simulated facial expression, a dynamic image simulating the user's real expression is generated. The user terminal 101 then samples this dynamic image at a preset sampling rate, collecting a sequence of images over time, and uploads the collected images to the server 102. The server 102 processes the images, generates and stores a corresponding expression simulation image that simulates the user's real expression, and feeds it back to the user terminal 101, which displays it in the user's evaluation information display area. The evaluation information thus intuitively reflects the user's real attitude toward the product or merchant, improving its accuracy.
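The fixed-rate sampling step performed by the user terminal can be sketched as follows; the frame rates are illustrative assumptions, not values specified in the application:

```python
# Toy sketch: sample a recorded dynamic image at a preset rate, keeping
# roughly `sample_fps` frames per second out of a `source_fps` recording.
# The surviving frames are what would be uploaded to the server.

def sample_frames(frames, source_fps, sample_fps):
    """Keep about `sample_fps` frames per second of the source sequence."""
    if sample_fps >= source_fps:
        return list(frames)
    step = source_fps / sample_fps
    picked, t = [], 0.0
    while round(t) < len(frames):
        picked.append(frames[round(t)])
        t += step
    return picked

if __name__ == "__main__":
    one_second = list(range(30))             # a 30 fps recording, 1 second
    print(sample_frames(one_second, 30, 5))  # → [0, 6, 12, 18, 24]
```

A lower sampling rate shrinks the upload while keeping enough frames for the server to rebuild a recognizable expression animation.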
The examples provided in this application are further described below.
A first embodiment of the present application provides an evaluation information processing method, whose executing subject may be a user terminal. Fig. 2 is a flowchart of the evaluation information processing method according to the first embodiment, and the method is described in detail below with reference to Fig. 2. The following description refers to embodiments for the purpose of illustrating the principles of the method and is not intended to limit its actual use.
As shown in fig. 2, the evaluation information processing method provided in this embodiment includes the steps of:
step S201, obtaining an expression simulation image for simulating the real expression of the target user.
In this embodiment, a user publishes evaluation information for a product or merchant on the premise that the user has actually consumed the product, which improves the authenticity of the evaluation; the target user of this embodiment is therefore a user who is currently consuming, or has consumed, a product or a merchant's offering. Simulating the real expression of the target user means simulating the expression the user actually shows after consumption. For example, if the target user is satisfied with a product and the corners of the user's mouth turn up, that mouth-corner-up state is simulated, so the resulting expression simulation image also shows the mouth corners up; if the user's real facial expression is a blink, the blink is simulated and the image shows the blinking state. In other words, the expression simulation image may be static or animated, as long as it simulates the target user's real expression. This embodiment mainly simulates the target user's real facial expression, but it can also simulate the user's real body movements, so that the user's true attitude after consumption is reflected accurately, further improving the accuracy of the evaluation information.
In this embodiment, the expression simulation image may be a dynamic image in GIF (Graphics Interchange Format) or another standard format. Using a standard format allows the image to be displayed on many kinds of terminal devices, so more users can see the evaluation of the target user who consumed the product or visited the merchant. Because the image simulates the target user's current real facial expression, it appears more lifelike, letting other users understand the target user's genuine feelings; this improves the accuracy of the evaluation information and enriches the interaction between the target user and the users viewing the evaluation.
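As a small side calculation (the 10 fps figure is an assumption for the demo, not a value from the application), the per-frame delay stored in a standard-format GIF animation can be derived from a desired playback rate, since GIF stores frame delays in centiseconds:

```python
# GIF stores each frame's delay in centiseconds (1/100 s), so a desired
# playback rate has to be converted before frames are assembled into a
# standard-format animation. The frame rate used below is an assumption.

def gif_frame_delay_cs(fps):
    """Per-frame delay in centiseconds, the unit used by the GIF format."""
    return round(100 / fps)

def playback_seconds(frame_count, fps):
    """Total playback time of the assembled animation, in seconds."""
    return frame_count * gif_frame_delay_cs(fps) / 100

if __name__ == "__main__":
    print(gif_frame_delay_cs(10))    # → 10 (centiseconds per frame)
    print(playback_seconds(25, 10))  # → 2.5
```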
In this embodiment, the expression simulation image for simulating the real expression of the target user is obtained through the following three substeps S201-1 to S201-3, described below with reference to fig. 3; fig. 3 is a flowchart of the method for obtaining an expression simulation image according to the first embodiment of the present application.
Step S201-1, a dynamic image simulating the real expression of the target user is obtained.
The dynamic image simulating the real expression of the target user may be a video or an animation. Its format is a non-standard one that can be displayed only on a specific terminal device, or in a specific area designated by that device. Only after the user terminal or server processes this dynamic image can it generate a standard-format expression simulation image that any terminal device can display.
Specifically, obtaining the dynamic image simulating the real expression of the target user proceeds as follows. The user terminal displays the evaluation information display area corresponding to the product or merchant, as shown in fig. 4a, and then, in response to a trigger operation on that area, enters the interface for obtaining the dynamic image, as shown in fig. 4b. The user terminal captures the target user's facial expression with structured light, which can be emitted via the camera. The target user can trigger the image-obtaining simulation block in the interface; in response to this trigger operation, structured light is projected onto the target user's face to obtain a facial structured-light image. The phase information corresponding to the deformed pixels in that image is demodulated and converted into height information, from which a three-dimensional facial model of the target user is obtained. Finally, a dynamic image reflecting the target user's real expression is generated by simulation on the target user simulation carrier that the user selected, based on the three-dimensional facial model.
In this embodiment, the target user simulation carrier may be a 3D simulation carrier, such as a cartoon character simulation carrier, an animal simulation carrier, or a doll simulation carrier. For example, as shown in fig. 4b, the target user simulation carrier is a cartoon doll simulation carrier displayed in a recording area; any carrier falls within the scope of this embodiment as long as it does not infringe the privacy of the user. Of course, in other schemes of this embodiment, the target user simulation carrier may also be created by the target user himself or herself, which improves the interactivity of the evaluation manner. The target user is thus more enthusiastic about evaluating a product or merchant after consumption, and other users in turn obtain more comprehensive evaluation information about the product or the merchant.
Generating, according to the three-dimensional facial model, the dynamic image reflecting the real expression of the target user by simulation on the target user simulation carrier selected by the target user specifically includes the following steps: extracting feature information from the three-dimensional facial model; matching the feature information with the target user simulation carrier selected by the target user, so as to simulate the real expression of the target user on the carrier; and recording the simulation process to generate the dynamic image of the real expression of the target user. That is, when the user terminal obtains the trigger operation on the image acquisition simulation block, the target user simulation carrier simulates the real facial expression of the target user according to the trigger operation, and at the same time the simulation process of the carrier located in the recording area is recorded, so as to obtain a dynamic image simulating the real facial expression of the target user. In other words, the carrier synchronously displays the same facial expression as the target user's current one: if the target user's mouth corners are rising, the expression simulated by the carrier is also in a mouth-corner-rising state; if the target user is continuously blinking, the carrier is also continuously blinking. After the recording is finished, as shown in fig. 4c, the upload virtual button of the recording area may be triggered directly and the sending operation for the interface acquired, so as to upload the dynamic image to the server or to the evaluation information display area.
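The step of matching extracted feature information to the carrier so that the carrier reproduces the user's expression can be pictured as a blendshape-style linear blend. This is a hedged sketch, not the patent's stated implementation: the mesh shapes, the expression deltas, and the idea of extracting per-expression weights from the facial model are assumptions introduced for illustration:

```python
import numpy as np

def apply_expression(base_mesh: np.ndarray, blend_deltas: np.ndarray,
                     weights: np.ndarray) -> np.ndarray:
    """Deform a carrier's rest-pose mesh by a weighted sum of expression deltas.

    base_mesh:    (V, 3) rest-pose vertices of the simulation carrier.
    blend_deltas: (K, V, 3) per-expression vertex offsets (e.g. smile, blink).
    weights:      (K,) expression weights extracted from the user's 3D face model.
    """
    # Contract the expression axis: each weight scales its delta mesh
    return base_mesh + np.tensordot(weights, blend_deltas, axes=1)

base = np.zeros((5, 3))            # toy 5-vertex carrier mesh
deltas = np.ones((2, 5, 3))        # two toy expression deltas
posed = apply_expression(base, deltas, np.array([0.5, 0.0]))  # half-smile, no blink
```

Recording such posed meshes frame by frame would yield the dynamic image of the carrier mirroring the user's expression.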
It should be noted that, after the dynamic image simulating the real expression of the target user is formed, the target user may also view its specific content immediately; that is, the user terminal obtains a trigger operation for viewing the dynamic image and plays it in the recording area. If the target user is not satisfied with the dynamic image generated by the current recording, it may be deleted and re-recorded. It should further be noted that, in this embodiment, the dynamic image in which the target user simulation carrier simulates the real expression of the target user is limited by recording parameters, the recording parameters being at least one of the following: the capacity of the dynamic image, the duration of the dynamic image, and the scene of the dynamic image. Specifically, the upper limit of the capacity of the dynamic image may be set to 30M-50M, which both preserves the definition of the dynamic image and allows the user terminal to transmit it quickly. The upper limit of the duration of the dynamic image may be 10s-15s, so that the dynamic image can be transmitted quickly by the user terminal and the target user can finish evaluating the consumed product or merchant in a short time; this reduces the cost to the target user of entering evaluation information and improves the target user's evaluation experience.
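The capacity and duration limits above reduce to a trivial validation step before upload. A sketch assuming the embodiment's upper limits of 50M and 15s (the function name is hypothetical):

```python
# Upper limits taken from the embodiment: 30M-50M capacity, 10s-15s duration
MAX_CAPACITY_MB = 50
MAX_DURATION_S = 15

def recording_within_limits(capacity_mb: float, duration_s: float) -> bool:
    """Check a recorded dynamic image against the embodiment's recording parameters."""
    return capacity_mb <= MAX_CAPACITY_MB and duration_s <= MAX_DURATION_S

ok = recording_within_limits(32.0, 12.0)        # within both limits
too_long = recording_within_limits(32.0, 20.0)  # exceeds the duration limit
```

A recording failing the check would be rejected or re-recorded before the upload step.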
The scene of the dynamic image simulated by the target user simulation carrier mainly refers to the wholesomeness of the carrier's simulation of the target user's real facial expression; that is, the scene of the simulated dynamic image needs to be healthy and positive, so that healthy and positive energy is conveyed to other users when it is displayed on other terminal devices, thereby purifying the internet environment.
After the dynamic image simulating the real expression of the target user is acquired in step S201-1, step S201-2 is executed to intercept at least one frame of static image from the dynamic image.
In the present embodiment, since the generated dynamic image is in a non-standard format, it must be converted into an image in a standard format before it can be displayed on other terminal devices. Specifically, the dynamic image is first intercepted to obtain at least one frame of static image from it. That is, after recording the dynamic image, the user terminal intercepts and collects it at a predetermined sampling rate, which in this embodiment is 60 Hz, so as to obtain at least one frame of static image corresponding to at least one moment in the dynamic image. Of course, in another scheme of this embodiment, after the target user finishes recording, the server may obtain a recording-end instruction for the dynamic image and send the user terminal an interception request for the dynamic image. The interception request includes identification information of the target user, which may include the ID information of the dynamic image, so that the dynamic image is intercepted and collected at the predetermined sampling rate according to its ID information, again yielding at least one frame of static image corresponding to at least one moment. This reduces the operations required of the target user and improves the interception efficiency.
In this embodiment, after the complete dynamic image is intercepted and collected at the predetermined sampling rate, it can generally be intercepted into static images of a plurality of frames. Because the capacity of the dynamic image is small while its predetermined sampling rate is high, adjacent collected frames change relatively little. The first collected frame is therefore recorded as the full-data image, and each subsequent frame is differenced against the previous frame, which reduces the data volume of the generated expression simulation image and facilitates uploading. Specifically, if the static image of the current frame differs from that of the previous frame, the current frame is recorded; if it is the same, whether the next frame differs from the current frame is judged, and if so, the next frame is recorded; if they are also the same, the frame after that is compared in turn, and so on, so that a plurality of consecutive static frames are recorded. The smaller the data volume of the dynamic image simulating the real expression of the target user, the better, so as to ensure that it is uploaded quickly.
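The keyframe-plus-differences scheme described above can be sketched with numpy arrays standing in for frames. For brevity the sketch records each changed frame in full rather than storing the pixel-level difference; only the comparison logic (keep the first frame, then keep a frame only when it differs from its predecessor) follows the embodiment:

```python
import numpy as np

def delta_encode(frames):
    """Keep the first frame as full data; keep later frames only when they change.

    Returns a list of (index, frame) pairs: the full-data first frame plus every
    frame that differs from the immediately preceding frame.
    """
    recorded = [(0, frames[0])]
    prev = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        if not np.array_equal(frame, prev):
            recorded.append((i, frame))  # frame changed: record it
        prev = frame                     # always compare against the previous frame
    return recorded

a = np.zeros((2, 2))
b = np.ones((2, 2))
kept = delta_encode([a, a, b, b, a])  # only frames 0, 2 and 4 are recorded
```

Dropping the unchanged frames is what keeps the uploaded expression simulation image small.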
And step S201-3, generating an expression simulation image according to the static image of at least one frame.
In this embodiment, if the acquired dynamic image simulating the real expression of the target user is an unchanging dynamic image, the expression simulation image may be generated from a single static frame; if it is a changing dynamic image, the expression simulation image may be generated from a single static frame or from a plurality of consecutive static frames, which is not limited in this embodiment. The generated expression simulation image is an image in a standard format, for example the GIF format, so that it can be displayed on different terminal devices.
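Assembling the sampled frames into a standard-format GIF can be done with Pillow. A sketch only: the frame duration of about 16 ms approximates the embodiment's 60 Hz predetermined sampling rate, and the toy solid-colour frames stand in for real sampled frames:

```python
from io import BytesIO
from PIL import Image

def frames_to_gif(frames, frame_ms: int = 16) -> BytesIO:
    """Encode a list of PIL images as an animated GIF in memory.

    frame_ms approximates the 60 Hz predetermined sampling rate (1000/60 ms).
    """
    buf = BytesIO()
    frames[0].save(
        buf, format="GIF", save_all=True,
        append_images=frames[1:], duration=frame_ms, loop=0,
    )
    buf.seek(0)
    return buf

# Three toy palette-mode frames with distinct colours
frames = [Image.new("P", (8, 8), color=i * 40) for i in range(3)]
gif = frames_to_gif(frames)
```

The resulting GIF bytes are in a standard format and can be displayed on any terminal device, which is the point of the conversion step.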
Step S202, sending the expression simulation image to a first device as an image corresponding to the evaluation information of the target user for the target object, or displaying the expression simulation image in an evaluation information display area corresponding to the target user, wherein the evaluation information display area is used for displaying the evaluation information of the target user for the target object.
After the expression simulation image simulating the real expression of the target user is obtained, it may be sent to the first device as the image corresponding to the target user's evaluation information for the target object, and may be stored in the first device as such. This both backs up the expression simulation image and allows it to be sent promptly to other user terminals when they look up the images corresponding to the evaluation information for the target object, so that the users of those terminals learn the real post-consumption evaluation of the product or the merchant. In this embodiment, the first device is a server, the target object is a consumed product or a consumed merchant, and the target user is the one who evaluates the product or the merchant.
In this embodiment, voice evaluation information of the target user for the target object may also be sent to the first device. The voice evaluation information is simulated voice evaluation information: the real voice evaluation information of the target user for the target object is obtained, and the simulated voice evaluation information is generated from it. The simulated voice evaluation information simulates the target user's real voice evaluation of the target object; it may be entered through the target user simulation carrier while the user terminal generates the expression simulation image. That is, the target user utters the real voice evaluation information, which is entered synchronously while the carrier simulates the target user's expression. Of course, the simulated voice evaluation information may also be entered directly, which will not be detailed here. When the first device sends the voice evaluation information to other users' terminals, those users can directly hear the target user's real voice evaluation of the target object. Furthermore, the voice evaluation information may be converted into text evaluation information and sent to the first device, so that when the first device sends it to other users' terminals, those users can directly read the target user's text evaluation of the target object.
The introduction of the voice evaluation information and the text evaluation information improves the accuracy of the evaluation information submitted by the target user, and also increases the diversity of the evaluation information the target user submits for the target object.
On the other hand, after the expression simulation image for simulating the real expression of the target user is obtained, the expression simulation image can be displayed in the evaluation information display area corresponding to the target user, wherein the evaluation information display area is used for displaying the evaluation information of the target user for the target object.
Specifically, please refer to figs. 4a, 4b, 4c, and 4d, which are respectively the first, second, third, and fourth schematic diagrams of obtaining an expression simulation image provided in the embodiment of the present application. Figs. 4a to 4d schematically show the specific process in this embodiment from obtaining an expression simulation image simulating the real expression of the target user to displaying it in the evaluation information display area. The present embodiment is explained below with reference to these drawings.
In this embodiment, when the target user evaluates a consumed product or a merchant after consumption, the target user needs to perform the corresponding operations in the evaluation information display area of the user terminal. Specifically, as shown in fig. 4a, the user terminal displays the evaluation information display area and obtains a first trigger operation of the target user in that area, the first trigger operation being used to obtain an expression simulation image. That is, obtaining the expression simulation image simulating the real expression of the target user includes: acquiring the expression simulation image in response to the first trigger operation. Specifically, in response to the first trigger operation, the user terminal displays an interface for generating the expression simulation image, as shown in fig. 4b, on which the user generates the expression simulation image. A text evaluation information input area, a recording area, and a triggerable image acquisition simulation block are arranged on this interface. By acquiring the trigger operation on the image acquisition simulation block, an image acquisition device on the user terminal, namely a camera capable of emitting structured light toward the target user, can acquire a real expression image of the target user.
Further, by triggering the image acquisition simulation block on the interface, the user terminal acquires a second trigger operation of the target user on the interface, the second trigger operation being used to acquire a real expression image of the target user. That is, acquiring the real expression image of the target user through the image acquisition device includes: acquiring the real expression image in response to the second trigger operation. Specifically, upon acquiring the second trigger operation, the user terminal projects structured light onto the face of the target user to acquire a facial structured light image, demodulates the phase information corresponding to the deformed-position pixels in that image, converts the phase information into height information, and obtains the three-dimensional facial model of the target user corresponding to the facial structured light image from the height information. Feature information is then extracted from the three-dimensional facial model and matched with the target user simulation carrier selected by the target user, so that the real expression of the target user is simulated on the carrier; the simulation process is recorded, generating a real expression image reflecting the real expression of the target user.
That is, when the user terminal obtains the trigger operation on the image acquisition simulation block, the target user simulation carrier simulates the real facial expression of the target user according to the trigger operation, while the simulation process of the carrier located in the recording area is recorded, so as to obtain a real expression image simulating the real facial expression of the target user. In other words, the carrier synchronously displays the same expression as the target user's current face: if the target user's real facial expression is in a mouth-corner-raised state, the expression simulated by the carrier, and hence the real expression image, is also in a mouth-corner-raised state; if the target user is continuously blinking, the simulated expression and the real expression image likewise show continuous blinking.
In this embodiment, the target user simulation carrier may be a 3D simulation carrier, such as a cartoon character simulation carrier or an animal simulation carrier. For example, as shown in fig. 4b, the target user simulation carrier is a cartoon doll simulation carrier displayed in the recording area; any carrier falls within the scope of this embodiment as long as the privacy of the user is not violated. It should be noted that the target user simulation carrier may be selected from candidate simulation carriers displayed on the interface, selected from simulation carriers stored on the current device, or edited on the current user terminal.
After the recording is finished, a third trigger operation of the target user on the interface is obtained, the third trigger operation being used to stop acquiring the real expression image of the target user. That is, generating the expression simulation image according to the real expression image of the target user includes: generating the expression simulation image from the acquired real expression image in response to the third trigger operation. In other words, after the real expression image of the target user is acquired, its acquisition can be stopped by acquiring the trigger operation on the image acquisition simulation block, and the expression simulation image is generated from it. Specifically, because the real expression image of the target user is presented through the target user simulation carrier, it is a real expression image in a non-standard format and must be converted into an image in a standard format to be displayed on other terminal devices. The specific processing has been described in detail above for the dynamic image in the non-standard format and is not repeated here. After the real expression image is processed, an expression simulation image in a standard format is generated; the standard format may be the GIF format. Finally, as shown in figs. 4c and 4d, the expression simulation image is uploaded by acquiring the operation on the upload virtual button in the recording area, and then sent to the evaluation information display area of the target user by acquiring the sending operation on the interface, so that the generated expression simulation image can be displayed in the evaluation information display area of the target user, further improving the accuracy of the submitted evaluation information.
It should be noted that, because the image based on the real expression is presented through the target user simulation carrier, the expression simulation image includes the target user simulation carrier used to simulate the target user, and the simulated expression features that simulate the target user's real expression features are carried on that carrier.
In this embodiment, before displaying the expression simulation image, the method further includes: adjusting display parameters of the expression simulation image according to the display requirements of the target user's evaluation information display area, the display parameters including at least one of the following: the size of the expression simulation image, its format, its color value, and its resolution.
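The display-parameter adjustment can be sketched with Pillow on a single frame. The target size and output format below are hypothetical display requirements; colour value and resolution adjustments would be handled similarly via Pillow's `convert()` and `resize()`:

```python
from io import BytesIO
from PIL import Image

def adjust_for_display(image: Image.Image, size=(120, 120), fmt: str = "PNG") -> bytes:
    """Fit an expression simulation image to an evaluation display area.

    Resizes to the display area's required size and re-encodes in the
    required format, returning the encoded bytes.
    """
    out = BytesIO()
    image.resize(size).save(out, format=fmt)
    return out.getvalue()

# Toy stand-in for a generated expression simulation frame
src = Image.new("RGB", (480, 480), color=(200, 120, 40))
data = adjust_for_display(src)
```

For an animated GIF, each frame would be adjusted this way before re-assembly.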
In this embodiment, the user terminal may also obtain voice evaluation information of the target user for the target object. The voice evaluation information is simulated voice evaluation information: the real voice evaluation information of the target user for the target object is obtained, and the simulated voice evaluation information is generated from it. The simulated voice evaluation information simulates the target user's real voice evaluation of the target object; it may be entered through the target user simulation carrier while the user terminal generates the expression simulation image. That is, the target user utters the real voice evaluation information, which is entered synchronously while the carrier simulates the target user's expression. Of course, the simulated voice evaluation information may also be entered directly, which will not be detailed here. The voice evaluation information is then played in the evaluation information display area, so that other users can directly hear the target user's real voice evaluation of the target object. Furthermore, the voice evaluation information may be converted into text evaluation information and displayed in the evaluation information display area, through which other users can directly read the target user's text evaluation of the target object. The introduction of the voice and text evaluation information improves the accuracy of the evaluation information submitted by the target user and increases its diversity.
In this embodiment, the user terminal may also send a request message for obtaining evaluation information for the target object to a second device; that is, for the current product or merchant, the target user may likewise view other users' evaluation information for the target object. In that case, obtaining an expression simulation image for simulating the real expression of the target user includes: obtaining the expression simulation image provided by the second device. Specifically, the user terminal obtains a fourth trigger operation for displaying evaluation information for the target object and, in response to it, sends the second device a request message for obtaining that evaluation information. In this embodiment, the second device may be the first device, that is, the same server.
A first embodiment of the present application provides an evaluation information processing method, including: obtaining an expression simulation image for simulating the real expression of a target user; and sending the expression simulation image to a first device as the image corresponding to the target user's evaluation information for a target object, or displaying it in an evaluation information display area corresponding to the target user, the evaluation information display area being used to display the target user's evaluation information for the target object. Because the expression simulation image of the first embodiment simulates the real expression of the target user, it can express the target user's real feeling after consuming the product or patronizing the merchant. By evaluating the product or the merchant through an expression simulation image bearing the target user's real expression, the first embodiment expresses that real feeling intuitively while improving the accuracy of the evaluation information.
A second embodiment of the present application provides an evaluation information processing method whose execution subject may be a server. Fig. 5 is a flowchart of the evaluation information processing method according to the second embodiment, and the method is described in detail below with reference to fig. 5. The embodiments referred to in the following description illustrate the principles of the method and are not intended to limit its actual use.
As shown in fig. 5, the evaluation information processing method provided in the present embodiment includes the steps of:
in step S501, a request message sent by a third device for obtaining evaluation information for a target object is obtained.
In this embodiment, the target object may be a product or a merchant that has been consumed by a user, and that user may be defined as the target user. When another user wants to check the evaluation information of the product or the merchant, he or she does so through a third device, which is the user terminal used by that other user. After the third device sends the server a request message for acquiring the evaluation information of the target object, the server obtains the request message.
Step S502, providing, to the third device, an expression simulation image corresponding to the target user's evaluation information for the target object, the expression simulation image being used to simulate the real expression of the target user.
The acquisition of the expression simulation image has already been described in detail in the first embodiment; that is, the server stores evaluation information at least including the target user's expression simulation image for the product or the merchant, so the specific manner of acquiring the image is not repeated here. In this step, after obtaining the request message sent by the third device for obtaining the evaluation information for the target object, the server may send the stored evaluation information bearing the target user's expression simulation image for the product or the merchant to the third device, so that the users corresponding to the third device obtain the evaluation information with the expression simulation image. The expression simulation image is used to simulate the real expression of the target user.
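The server side of steps S501-S502 can be pictured as a lookup keyed by target object. This is an in-memory sketch only; a real implementation would sit behind an HTTP endpoint with persistent storage, and all identifiers and field names below are hypothetical:

```python
# In-memory stand-in for the server's stored evaluation information,
# keyed by target object (a consumed product or merchant)
EVALUATIONS = {
    "restaurant-9": [
        {"user": "u1", "expression_image": "u1_smile.gif", "text": "great noodles"},
    ],
}

def handle_request(target_object_id: str) -> list:
    """Steps S501/S502: receive a request for a target object's evaluation
    information and return the stored entries, including the expression
    simulation images, to be sent back to the third device."""
    return EVALUATIONS.get(target_object_id, [])

result = handle_request("restaurant-9")
```

A target object with no stored evaluations simply yields an empty list rather than an error.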
In this embodiment, the server may further obtain an expression simulation image provided by a fourth device, which may be understood as a user terminal different from the one corresponding to the target user. That is, the server may also obtain expression simulation images provided by users other than the target user. The acquisition of the expression simulation image has been described in detail on the basis of the first embodiment; only the user terminal differs, so the description is not repeated here.
Further, in this embodiment, the server may also obtain the text evaluation information of the target user for the target object provided by the fourth device, so that the server can provide the expression simulation image together with the text evaluation information to the third device. Of course, the server may likewise obtain the voice evaluation information of the target user for the target object provided by the fourth device, and provide the expression simulation image together with the voice evaluation information to the third device. The voice evaluation information is simulated voice evaluation information, which is used to simulate the real voice evaluation information of the target user for the target object.
A second embodiment of the present application provides an evaluation information processing method, including: obtaining a request message sent by a third device for obtaining evaluation information for a target object; and providing the third device with an expression simulation image corresponding to the target user's evaluation information for the target object, the expression simulation image being used to simulate the real expression of the target user. In the second embodiment, upon obtaining the request message sent by the third device, an expression simulation image simulating the real expression of the target user can be provided to the third device as evaluation information. Because this image truly reflects the target user's real expression after consuming the target object, it can accurately reflect the target user's genuine evaluation of the target object, improving the accuracy of the evaluation information.
A third embodiment of the present application further provides an evaluation information processing apparatus. Since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for details of the related technical features, refer to the corresponding description of the method embodiment provided above. The following description of the apparatus embodiment is merely illustrative.
Please refer to fig. 6 to understand this embodiment; fig. 6 is a block diagram of the units of the apparatus provided in this embodiment. As shown in fig. 6, the apparatus includes: an expression simulation image obtaining unit 601, configured to obtain an expression simulation image for simulating the real expression of a target user; and an expression simulation image processing unit 602, configured to send the expression simulation image to a first device as the image corresponding to the target user's evaluation information for a target object, or to display it in an evaluation information display area corresponding to the target user, the evaluation information display area being used to display the target user's evaluation information for the target object. Because the expression simulation image of the third embodiment simulates the real expression of the target user, it can express the target user's real feeling after consuming the product or patronizing the merchant; evaluating the product or the merchant through such an image expresses that real feeling intuitively while improving the accuracy of the evaluation information.
In the embodiments described above, an evaluation information processing method and an evaluation information processing apparatus are provided. In addition, a fourth embodiment of the present application provides an electronic device. Since the electronic device embodiment is basically similar to the method embodiment, its description is brief; details of the related technical features can be found in the corresponding description of the method embodiment provided above, and the following description is only illustrative. Please refer to fig. 7, a schematic view of the electronic device provided in this embodiment. As shown in fig. 7, the electronic device includes: a processor 701; and a memory 702 for storing a program for data processing, which, when read and executed by the processor, performs the following operations: acquiring an expression simulation image for simulating the real expression of a target user; and sending the expression simulation image to a first device as an image corresponding to the evaluation information of the target user for a target object, or displaying the expression simulation image in an evaluation information display area corresponding to the target user, wherein the evaluation information display area is used for displaying the evaluation information of the target user for the target object.
Because the expression simulation image of the fourth embodiment simulates the real expression of the target user, it can convey the target user's genuine feeling after consuming the product or visiting the merchant. By evaluating the product or merchant through an expression simulation image carrying the target user's real expression, the fourth embodiment expresses that genuine feeling visually while improving the accuracy of the evaluation information.
The fifth embodiment of the present application also provides an evaluation information processing apparatus. Since the apparatus embodiment is basically similar to the method embodiment, its description is brief; details of the related technical features can be found in the corresponding description of the method embodiment provided above, and the following description of the apparatus embodiment is only illustrative.
Referring to fig. 8, which is a block diagram of the units of the apparatus provided in this embodiment, the apparatus includes: a request message acquiring unit 801, configured to acquire a request message issued by a third device to acquire evaluation information for a target object; and an expression simulation image providing unit 802, configured to provide, to the third device, an expression simulation image corresponding to evaluation information of a target user for the target object, where the expression simulation image is used to simulate the real expression of the target user. In the fifth embodiment, after the request message sent by the third device is obtained, the expression simulation image simulating the real expression of the target user can be provided to the third device as the evaluation information. Because the real expression genuinely reflects how the target user feels after consuming the target object, the simulated expression simulation image can accurately reflect the target user's true evaluation of the target object, improving the accuracy of the evaluation information.
In the embodiments described above, an evaluation information processing method and an evaluation information processing apparatus are provided. In addition, a sixth embodiment of the present application provides an electronic device. Since the electronic device embodiment is basically similar to the method embodiment, its description is brief; details of the related technical features can be found in the corresponding description of the method embodiment provided above, and the following description is only illustrative. Please refer to fig. 7, a schematic view of the electronic device provided in this embodiment. As shown in fig. 7, the electronic device includes: a processor 701; and a memory 702 for storing a program for data processing, which, when read and executed by the processor, performs the following operations: obtaining a request message sent by a third device for obtaining evaluation information for a target object; and providing, to the third device, an expression simulation image corresponding to the evaluation information of a target user for the target object, wherein the expression simulation image is used for simulating the real expression of the target user.

In the sixth embodiment, after the request message sent by the third device is obtained, the expression simulation image simulating the real expression of the target user can be provided to the third device as the evaluation information. Because the real expression genuinely reflects how the target user feels after consuming the target object, the simulated expression simulation image can accurately reflect the target user's true evaluation of the target object, improving the accuracy of the evaluation information.
In correspondence with the evaluation information processing method provided by the first embodiment, the seventh embodiment of the present application further provides a computer storage medium. Since the computer storage medium embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant points, reference may be made to the description of the method embodiment, and the computer storage medium embodiment described below is only illustrative.
A seventh embodiment of the present application provides a computer storage medium storing a computer program that is executed by a processor to perform the evaluation information processing method described in the first embodiment. Because the expression simulation image of the seventh embodiment simulates the real expression of the target user, it can convey the target user's genuine feeling after consuming the product or visiting the merchant. By evaluating the product or merchant through an expression simulation image carrying the target user's real expression, the seventh embodiment expresses that genuine feeling visually while improving the accuracy of the evaluation information.
In correspondence with the evaluation information processing method provided by the second embodiment, the eighth embodiment of the present application also provides a computer storage medium. Since the computer storage medium embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant points, reference may be made to the description of the method embodiment, and the computer storage medium embodiment described below is only illustrative.
An eighth embodiment of the present application provides a computer storage medium storing a computer program that is executed by a processor to perform the evaluation information processing method described in the second embodiment. In the eighth embodiment, after the request message sent by the third device for obtaining evaluation information for the target object is obtained, the expression simulation image simulating the real expression of the target user can be provided to the third device as the evaluation information. Because the real expression genuinely reflects how the target user feels after consuming the target object, the simulated expression simulation image can accurately reflect the target user's true evaluation of the target object, improving the accuracy of the evaluation information.
A ninth embodiment of the present application provides an image processing method, whose execution body may be a user terminal. Fig. 9 is a flowchart of the image processing method according to the ninth embodiment, and the method is described in detail below with reference to fig. 9. The following references to embodiments are intended to illustrate the principles of the method and are not limiting in actual use.
As shown in fig. 9, the image processing method provided by the present embodiment includes the following steps:
step S901, a dynamic image simulating a real expression of a target user is obtained.
In this embodiment, the dynamic image simulating the real expression of the target user may be a video or an animated image. The dynamic image is initially in a non-standard format that can be displayed only on a specific terminal device, or in a specific area designated by that terminal device. Only after the user terminal or the server processes this dynamic image can an expression simulation image in a standard format be generated, so that the expression simulation image can be displayed on any terminal device.
Further, acquiring the dynamic image simulating the real expression of the target user specifically includes: the user terminal displays an evaluation information display area corresponding to the product or the merchant, as shown in fig. 4a, and then, in response to a trigger operation on the evaluation information display area, enters an interface for acquiring the dynamic image, as shown in fig. 4b. The user terminal obtains the facial expression of the target user through structured light, which may be emitted by a camera module. The target user can trigger an image-acquisition simulation block in the interface, so that the user terminal obtains the target user's trigger operation for capturing his or her real facial expression. According to the trigger operation, structured light is projected onto the face of the target user to obtain a facial structured-light image; the phase information corresponding to the deformed pixels in the facial structured-light image is demodulated and converted into height information; and a three-dimensional facial model of the target user corresponding to the facial structured-light image is then obtained from the height information. Next, a dynamic image reflecting the real expression of the target user is generated by simulation on a target user simulation carrier selected by the target user, according to the three-dimensional facial model. The target user simulation carrier can also be displayed in an information interaction application, in which case acquiring the dynamic image simulating the real expression of the target user includes: acquiring the dynamic image displayed in the information interaction application.
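The phase-to-height step above can be sketched in simplified form. This is only an illustrative assumption, not the application's actual demodulation algorithm: it assumes a linear relation between phase deviation and surface height, and the `scale` constant and array sizes are hypothetical.

```python
import numpy as np

def phase_to_height(wrapped_phase, reference_phase, scale=0.5):
    # Demodulate: the deviation of the measured phase from the flat-reference
    # phase is assumed proportional to surface height (hypothetical scale).
    delta = np.unwrap(wrapped_phase - reference_phase)
    return scale * delta

# A facial bump raises the phase locally; the flat background stays at height 0.
reference = np.zeros(8)
measured = reference.copy()
measured[3:5] += 0.6          # deformation caused by facial relief
height = phase_to_height(measured, reference)
```

In a real structured-light system the phase-height relation depends on the projector-camera geometry and requires calibration; the sketch only shows the direction of the conversion.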
That is to say, the dynamic image simulating the real expression of the target user is not limited to the evaluation information scenario; it may also come from a display interface in the information interaction application.
In this embodiment, the target user simulation carrier may be a 3D simulation carrier, such as a cartoon character simulation carrier, an animal simulation carrier, or a doll simulation carrier. For example, as shown in fig. 4b, the target user simulation carrier is a cartoon doll simulation carrier displayed in a recording area; any carrier falls within the protection scope of this embodiment as long as it does not violate the user's privacy. Of course, in other schemes of this embodiment, the target user simulation carrier may also be created by the target user. This improves the interactivity of the evaluation process, increases the target user's enthusiasm for evaluating a product or merchant after consumption, and in turn gives other users more comprehensive evaluation information about the product or merchant.
Generating the dynamic image reflecting the real expression of the target user on the target user simulation carrier according to the three-dimensional facial model specifically includes: extracting feature information from the three-dimensional facial model and matching it with the target user simulation carrier selected by the target user, so that the real expression of the target user is simulated on the carrier, and recording the simulation process to generate the dynamic image. That is, when the user terminal obtains the trigger operation on the image-acquisition simulation block, the carrier simulates the real facial expression of the target user, and at the same time the simulation process of the carrier located in the recording area is recorded, so as to obtain a dynamic image simulating the real facial expression of the target user. In other words, the carrier synchronously displays the same facial expression as the target user's current one: if the target user's mouth corners are raised, the carrier's mouth corners are raised; if the target user is continuously blinking, the carrier continuously blinks. After the recording is finished, as shown in fig. 4c, the upload virtual button in the recording area may be triggered directly and the sending operation for the interface acquired, so as to upload the dynamic image to the server or the evaluation information display area.
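The feature-matching described above can be sketched as a mapping from detected facial features to the carrier's displayed state. The feature names and states below are hypothetical placeholders; the application does not specify a concrete feature vocabulary, and a real implementation would drive 3D blendshape weights rather than labels.

```python
def mirror_expression(features):
    """Return the carrier's expression state mirroring the user's detected
    features (hypothetical label-based mapping for illustration only)."""
    carrier_state = {}
    if features.get("mouth_corners") == "raised":
        carrier_state["mouth"] = "smiling"      # mouth-corner-rising state
    if features.get("eyes") == "blinking":
        carrier_state["eyes"] = "blinking"      # continuously-blinking state
    return carrier_state

# The carrier synchronously shows the same expression the user currently makes.
state = mirror_expression({"mouth_corners": "raised", "eyes": "blinking"})
```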
It should be noted that, after the dynamic image simulating the real expression of the target user is formed, the target user may immediately view its content: the user terminal obtains a trigger operation for viewing the dynamic image and plays it in the recording area, and if the target user is not satisfied with the current recording, the dynamic image may be deleted and re-recorded. It should further be noted that, in this embodiment, the dynamic image of the carrier simulating the real expression of the target user is constrained by recording parameters, which are at least one of the following: the capacity of the dynamic image, the duration of the dynamic image, and the scene of the dynamic image. Specifically, in this embodiment the upper limit of the capacity may be set to 30 MB to 50 MB, which ensures the definition of the dynamic image while allowing the user terminal to transmit it quickly. The upper limit of the duration may be 10 to 15 seconds, so that the dynamic image can be transmitted quickly and the target user can complete the evaluation of the consumed product or merchant in a short time, reducing the cost of inputting evaluation information and improving the evaluation experience.
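A recording-parameter check like the one described can be sketched as follows. The function name and the exact limit values (50 MB, 15 s, taken from the upper ends of the ranges above) are illustrative assumptions; the application only gives ranges, not a concrete validator.

```python
# Hypothetical limits drawn from the description's upper bounds.
MAX_SIZE_MB = 50       # capacity upper limit
MAX_DURATION_S = 15    # duration upper limit

def recording_allowed(size_mb, duration_s):
    """Check a recorded dynamic image against the recording parameters."""
    return size_mb <= MAX_SIZE_MB and duration_s <= MAX_DURATION_S

# A 40 MB, 12 s recording passes; an oversized or overlong one is rejected.
ok = recording_allowed(40, 12)
```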
The scene of the dynamic image simulated by the target user simulation carrier mainly refers to the wholesomeness of the carrier's simulation of the target user's real facial expression; that is, the content of the simulated dynamic image should be healthy and positive, so that healthy, positive energy is conveyed to other users when it is displayed on their terminal devices, thereby keeping the Internet environment clean.
Step S902, intercepting a static image of at least one frame from the dynamic image.
In this embodiment, since the generated dynamic image is in a non-standard format, it must be converted into an image in a standard format before it can be displayed on other terminal devices. Specifically, in one mode, the user terminal obtains an interception trigger operation for the dynamic image and, according to that operation, captures at least one frame of static image from the dynamic image. In another mode, after recording the dynamic image, the user terminal samples it at a predetermined sampling rate, which in this embodiment is 60 Hz, so as to obtain at least one frame of static image corresponding to at least one moment in the dynamic image.
It can be understood that, in this embodiment, sampling the complete dynamic image at the predetermined sampling rate generally yields multiple frames of static images. Because the capacity of the dynamic image is small and the sampling rate is high, adjacent captured frames change relatively little. The first captured frame is therefore recorded as a full-data image, and each subsequent frame is differenced against the previous frame, which reduces the data volume of the generated expression simulation image and facilitates uploading. Specifically, if the static image of the current frame differs from that of the previous frame, the current frame is recorded; if they are the same, it is judged whether the next frame differs from the current frame, and if so, the next frame is recorded; if they are also the same, the comparison moves on to the frame after that, and so on, so that a sequence of consecutive static frames is recorded. In general, the smaller the data volume of the dynamic image simulating the real expression of the target user, the faster it can be uploaded.
Of course, the step of capturing at least one frame of static image from the dynamic image may further include: capturing the static image of a first target frame from the dynamic image; selecting, from the dynamic image, the static image of a second target frame located after the first target frame; differencing the static image of the second target frame against that of the first target frame to obtain first difference data; selecting the static image of a third target frame located after the second target frame; differencing the static image of the third target frame against that of the second target frame to obtain second difference data; and so on, obtaining difference data between the static images of all adjacent target frames.
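The full-data-plus-differences scheme above amounts to delta encoding of the frame sequence. The following is a minimal sketch under the assumption that frames are numeric arrays and the difference is simple element-wise subtraction; the application does not specify the difference operator or any compression details.

```python
import numpy as np

def delta_encode(frames):
    """Keep the first target frame in full; store only adjacent-frame diffs."""
    first = frames[0]
    diffs = [frames[i] - frames[i - 1] for i in range(1, len(frames))]
    return first, diffs

def delta_decode(first, diffs):
    """Rebuild the frame sequence from the full frame and the diffs."""
    frames = [first]
    for d in diffs:
        frames.append(frames[-1] + d)
    return frames

# Three 2x2 "frames" whose adjacent changes are small, as in the description:
# most diff entries are zero, so the encoded data volume shrinks.
frames = [np.array([[10, 10], [10, 10]]),
          np.array([[10, 11], [10, 10]]),
          np.array([[10, 11], [10, 10]])]
first, diffs = delta_encode(frames)
restored = delta_decode(first, diffs)
```

When adjacent frames are identical, their diff is all zeros and compresses to almost nothing, which is why the description prefers a high sampling rate with small inter-frame changes.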
Step S903, generating an expression simulation image according to the static image of the at least one frame, wherein the expression simulation image is used for simulating the real expression of the target user.
In this embodiment, if the acquired dynamic image simulating the real expression of the target user is unchanging, the expression simulation image may be generated from a single static frame; if the dynamic image changes, the expression simulation image may be generated from one static frame or from several consecutive static frames, which is not limited in this embodiment. Alternatively, the expression simulation image may be generated from the static image of the first target frame together with the difference data between all adjacent target frames. The generated expression simulation image is an image in a standard format, such as the GIF format, so that it can be displayed on different terminal devices and shown in the evaluation information display area.
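The frame-selection rule described above (one frame for an unchanging dynamic image, several consecutive frames for a changing one) can be sketched as follows. The frame representation is a hypothetical placeholder; actually encoding the selected frames as a GIF would be a separate step, e.g. with an image library.

```python
def frames_for_expression_image(frames):
    """If the dynamic image never changes, a single static frame suffices for
    the expression simulation image; otherwise keep the consecutive frames."""
    if all(f == frames[0] for f in frames):
        return frames[:1]
    return frames

# An unchanging recording collapses to one frame; a changing one keeps all.
static_clip = ["smile", "smile", "smile"]
moving_clip = ["smile", "blink", "smile"]
```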
In summary, the ninth embodiment of the present application provides an image processing method, including: acquiring a dynamic image simulating the real expression of a target user; capturing at least one frame of static image from the dynamic image; and generating an expression simulation image from the static image of the at least one frame, wherein the expression simulation image is used for simulating the real expression of the target user. Because the dynamic image of the ninth embodiment simulates the real expression of the target user, generating the expression simulation image from at least one frame captured from it allows an image simulating the target user's real expression to be displayed on different terminal devices.
The ninth embodiment provides an image processing method; correspondingly, the tenth embodiment of the present application provides an image processing apparatus. Since the apparatus embodiment is basically similar to the method embodiment, its description is relatively brief; details of the related technical features can be found in the corresponding description of the method embodiment provided above, and the following description of the apparatus embodiment is only illustrative. Referring to fig. 10, which is a block diagram of the units of the apparatus provided in this embodiment, the apparatus includes: an acquiring unit 1001, configured to acquire a dynamic image simulating the real expression of a target user; an intercepting unit 1002, configured to capture at least one frame of static image from the dynamic image; and a generating unit 1003, configured to generate an expression simulation image from the static image of the at least one frame, where the expression simulation image is used to simulate the real expression of the target user. Because the dynamic image of the tenth embodiment simulates the real expression of the target user, generating the expression simulation image from at least one frame captured from it allows an image simulating the target user's real expression to be displayed on different terminal devices.
In the embodiments described above, an image processing method and an image processing apparatus are provided. In addition, an eleventh embodiment of the present application provides an electronic device. Since the electronic device embodiment is basically similar to the method embodiment, its description is brief; details of the related technical features can be found in the corresponding description of the method embodiment provided above, and the following description is only illustrative. Please refer to fig. 7, a schematic view of the electronic device provided in this embodiment. As shown in fig. 7, the electronic device includes: a processor 701; and a memory 702 for storing a program for data processing, which, when read and executed by the processor, performs the following operations: acquiring a dynamic image simulating the real expression of a target user; capturing at least one frame of static image from the dynamic image; and generating an expression simulation image from the static image of the at least one frame, wherein the expression simulation image is used for simulating the real expression of the target user. Because the dynamic image of the eleventh embodiment simulates the real expression of the target user, generating the expression simulation image from at least one frame captured from it allows an image simulating the target user's real expression to be displayed on different terminal devices.
In correspondence with the image processing method provided by the ninth embodiment, the twelfth embodiment of the present application further provides a computer storage medium. Since the computer storage medium embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant points, reference may be made to the description of the method embodiment, and the computer storage medium embodiment described below is only illustrative. A twelfth embodiment of the present application provides a computer storage medium storing a computer program that is executed by a processor to perform the image processing method described in the ninth embodiment. Because the dynamic image of the twelfth embodiment simulates the real expression of the target user, generating the expression simulation image from at least one frame captured from it allows an image simulating the target user's real expression to be displayed on different terminal devices.
Although preferred embodiments of the present invention have been disclosed in the foregoing description, the present invention is not limited thereto, and those skilled in the art can make variations and modifications without departing from the spirit and scope of the embodiments of the present invention.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (10)

1. An evaluation information processing method characterized by comprising:
acquiring an expression simulation image for simulating the real expression of a target user;
and sending the expression simulation image to first equipment as an image corresponding to the evaluation information of the target user for the target object, or displaying the expression simulation image in an evaluation information display area corresponding to the target user, wherein the evaluation information display area is used for displaying the evaluation information of the target user for the target object.
2. An evaluation information processing method characterized by comprising:
obtaining a request message sent by third equipment and used for obtaining evaluation information aiming at a target object;
and providing an expression simulation image corresponding to the evaluation information of the target user for the target object to the third device, wherein the expression simulation image is used for simulating the real expression of the target user.
3. An evaluation information processing apparatus, comprising:
the expression simulation image acquisition unit is used for acquiring an expression simulation image used for simulating the real expression of the target user;
and the expression simulation image processing unit is used for sending the expression simulation image to first equipment as an image corresponding to the evaluation information of the target user for the target object, or displaying the expression simulation image in an evaluation information display area corresponding to the target user, wherein the evaluation information display area is used for displaying the evaluation information of the target user for the target object.
4. An evaluation information processing apparatus, comprising:
a request message acquiring unit configured to acquire a request message sent by a third device to acquire evaluation information for a target object;
and the expression simulation image providing unit is used for providing an expression simulation image corresponding to the evaluation information of the target user for the target object to the third equipment, and the expression simulation image is used for simulating the real expression of the target user.
5. An electronic device, comprising:
a processor;
a memory for storing a computer program which, when read and executed by the processor, performs the evaluation information processing method according to any one of claims 1 to 2.
6. A computer storage medium storing a computer program which, when executed by a processor, performs the evaluation information processing method according to any one of claims 1 to 2.
7. An image processing method, comprising:
acquiring a dynamic image simulating the real expression of a target user;
extracting at least one still frame from the dynamic image;
and generating an expression simulation image from the at least one still frame, wherein the expression simulation image is used for simulating the real expression of the target user.
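The three steps of the image processing method above can be sketched as follows, modeling the dynamic image as an ordered sequence of decoded frames (e.g. from a GIF or short clip). The frame-selection policy and all function names are illustrative assumptions, not the patent's own method.

```python
def extract_still_frames(dynamic_image, indices=(0,)):
    # Step 2: intercept at least one still frame from the dynamic image,
    # which is modeled here as an ordered sequence of frames.
    return [dynamic_image[i] for i in indices]

def generate_expression_image(frames):
    # Step 3: generate the expression simulation image from the still frame(s);
    # as a placeholder policy, pick the middle frame as most representative.
    return frames[len(frames) // 2]

# Step 1 stand-in: a 4-frame "dynamic image" of the user's real expression.
clip = ["neutral", "half-smile", "smile", "half-smile"]

stills = extract_still_frames(clip, indices=(1, 2, 3))
expression_image = generate_expression_image(stills)  # -> "smile"
```

With a real clip the frames would be pixel arrays decoded by an image or video library, but the step structure is the same.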
8. An image processing apparatus, characterized by comprising:
an acquiring unit, configured to acquire a dynamic image simulating the real expression of a target user;
an extracting unit, configured to extract at least one still frame from the dynamic image;
and a generating unit, configured to generate an expression simulation image from the at least one still frame, wherein the expression simulation image is used for simulating the real expression of the target user.
9. An electronic device, comprising:
a processor;
a memory for storing a computer program which, when read and executed by the processor, performs the image processing method of claim 7.
10. A computer storage medium storing a computer program which, when executed by a processor, performs the image processing method of claim 7.
CN201911094237.XA 2019-11-11 2019-11-11 Evaluation information processing method and device, electronic device and image processing method Pending CN111104854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911094237.XA CN111104854A (en) 2019-11-11 2019-11-11 Evaluation information processing method and device, electronic device and image processing method


Publications (1)

Publication Number Publication Date
CN111104854A true CN111104854A (en) 2020-05-05

Family

ID=70420489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911094237.XA Pending CN111104854A (en) 2019-11-11 2019-11-11 Evaluation information processing method and device, electronic device and image processing method

Country Status (1)

Country Link
CN (1) CN111104854A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739709A (en) * 2009-12-24 2010-06-16 四川大学 Control method of three-dimensional facial animation
CN103854197A (en) * 2012-11-28 2014-06-11 纽海信息技术(上海)有限公司 Multimedia comment system and method for the same
CN107277643A (en) * 2017-07-31 2017-10-20 合网络技术(北京)有限公司 The sending method and client of barrage content
CN107563362A (en) * 2017-10-01 2018-01-09 上海量科电子科技有限公司 Evaluate method, client and the system of operation
CN107767205A (en) * 2016-08-23 2018-03-06 阿里巴巴集团控股有限公司 Display systems, method, client and the processing method of evaluation information, server
CN108776903A (en) * 2018-05-16 2018-11-09 浙江口碑网络技术有限公司 User's evaluation method and device based on interactive form
CN108880975A (en) * 2017-05-16 2018-11-23 腾讯科技(深圳)有限公司 Information display method, apparatus and system
CN109003113A (en) * 2018-05-30 2018-12-14 浙江口碑网络技术有限公司 Evaluate the method and device of data processing and displaying, electronic equipment and storage equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200505)