CN108391141B - Method and apparatus for outputting information - Google Patents


Info

Publication number
CN108391141B
Application number
CN201810226177.1A
Authority
CN (China)
Prior art keywords
file, preset, preset voice, user, voice file
Legal status
Active
Other languages
Chinese (zh)
Other versions
CN108391141A (en)
Inventors
谢俊, 莫玮
Original assignee
JD Digital Technology Holdings Co Ltd
Current assignee
JD Digital Technology Holdings Co Ltd; Jingdong Technology Holding Co Ltd
Filing
Application CN201810226177.1A filed by JD Digital Technology Holdings Co Ltd; priority to CN201810226177.1A
Publications
CN108391141A (application published); CN108391141B (application granted)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/254: Management at additional data server, e.g. shopping server, rights management server
    • H04N 21/2541: Rights Management
    • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866: Management of end-user data
    • H04N 21/25875: Management of end-user data involving end-user authentication
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439: Processing of audio elementary streams
    • H04N 21/4398: Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for outputting information. One embodiment of the method comprises: in response to receiving an operation request of a user for a target product, shooting the user, selecting an unplayed preset voice file from a preset voice file set as a current preset voice file, and executing a multimedia file generation step, wherein the multimedia file generation step comprises: playing the current preset voice file to obtain video data and audio data; encoding the video data and the audio data to generate a video file and an audio file. The method further comprises: in response to determining that the audio data comprises voice data input by the user and that playing of the preset voice files in the preset voice file set is completed, merging the video file and the audio file to generate a target multimedia file; and sending an authentication reference file to a server so that the server authenticates the user based on the authentication reference file, wherein the authentication reference file comprises the target multimedia file. The embodiment improves the efficiency of authenticating the user.

Description

Method and apparatus for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
When a user wants to perform an operation related to personal privacy, for example, purchasing a product (e.g., a financial product) or transacting some business (e.g., opening an account), the user's identity often needs to be verified.
The existing method for verifying a user's identity is on-site verification, that is, the user needs to go to a designated place to complete the identity verification.
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, the method comprising: in response to receiving an operation request of a user for a target product, shooting the user, selecting an unplayed preset voice file from a preset voice file set as a current preset voice file, and executing a multimedia file generating step, wherein the multimedia file generating step comprises: playing the current preset voice file to obtain video data and audio data for the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file. The method further comprises: in response to determining that the audio data comprises voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed, merging the video file and the audio file to generate a target multimedia file; and sending an authentication reference file to a server so that the server authenticates the user based on the authentication reference file, wherein the authentication reference file comprises the target multimedia file.
In some embodiments, the authentication reference file further comprises a target image, the target image being generated based on the following steps: displaying product information preset for the target product; in response to receiving an information input request of the user for the product information, acquiring text information input by the user, wherein the text information comprises a signature of the user; and adding the acquired text information to a preset image to generate the target image, wherein the preset image contains the product information.
In some embodiments, the preset image includes a preset image area, wherein the preset image area is different from an image area of the product information on the preset image; and adding the acquired text information to a preset image, including: and adding the acquired text information into a preset image area.
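The constraint that the preset image area be distinct from the image area occupied by the product information can be expressed as a rectangle-intersection check. A minimal sketch follows; the coordinates, region names, and layout are illustrative assumptions, not part of the disclosed embodiment:

```python
def rects_overlap(a, b):
    """Each rectangle is (left, top, right, bottom) in image pixels.
    Returns True when the two areas intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

# Hypothetical layout: product information at the top of the preset image,
# signature (preset image) area below it -- the two regions must not intersect.
product_info_area = (40, 40, 600, 300)
signature_area = (40, 340, 600, 420)
print(rects_overlap(product_info_area, signature_area))  # False
```

A layout validator of this kind would run once when the preset image is designed, before any user text is drawn into the signature region.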
In some embodiments, playing the current preset voice file includes: and playing the current preset voice file, and displaying preset text information aiming at the played preset voice file.
In some embodiments, the method further comprises: determining whether the audio data does not include voice data input by a user for a current preset voice file; and in response to determining that the audio data does not include voice data input by the user for the current preset voice file, performing a multimedia file generating step.
In some embodiments, after determining whether the audio data does not include voice data input by the user for the current preset voice file, the method further includes: in response to determining that the audio data includes voice data input by the user for the current preset voice file, determining whether playing of the preset voice files in the preset voice file set is completed; and in response to determining that the preset voice files in the preset voice file set are not all played, selecting an unplayed preset voice file from the preset voice file set as the current preset voice file, and executing the multimedia file generating step.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, comprising: a first execution unit configured to, in response to receiving an operation request of a user for a target product, shoot the user, select an unplayed preset voice file from a preset voice file set as a current preset voice file, and execute a multimedia file generation step, wherein the multimedia file generation step comprises: playing the current preset voice file to obtain video data and audio data for the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file. The apparatus further comprises: a merging unit configured to merge the video file and the audio file to generate a target multimedia file in response to determining that the audio data comprises voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed; and a sending unit configured to send an authentication reference file to a server so that the server authenticates the user based on the authentication reference file, wherein the authentication reference file comprises the target multimedia file.
In some embodiments, the authentication reference file further comprises a target image, the target image being generated based on the following steps: displaying product information preset for the target product; in response to receiving an information input request of the user for the product information, acquiring text information input by the user, wherein the text information comprises a signature of the user; and adding the acquired text information to a preset image to generate the target image, wherein the preset image contains the product information.
In some embodiments, the preset image includes a preset image area, wherein the preset image area is different from an image area of the product information on the preset image; and the adding unit is further configured to add the acquired text information to a preset image area.
In some embodiments, playing the current preset voice file includes: and playing the current preset voice file, and displaying preset text information aiming at the played preset voice file.
In some embodiments, the apparatus further comprises: a determining unit configured to determine whether the audio data does not include voice data input by a user for a current preset voice file; and a second execution unit configured to execute the multimedia file generation step in response to a determination that the audio data does not include voice data input by a user for a current preset voice file.
In some embodiments, the second execution unit is further configured to: in response to determining that the audio data comprises voice data input by the user for the current preset voice file, determine whether playing of the preset voice files in the preset voice file set is completed; and in response to determining that the preset voice files in the preset voice file set are not all played, select an unplayed preset voice file from the preset voice file set as the current preset voice file, and execute the multimedia file generating step.
In a third aspect, an embodiment of the present application provides a terminal, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for outputting information described above.
In a fourth aspect, the present application provides a computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processor, implements the method of any one of the above embodiments of the method for outputting information.
According to the method and apparatus for outputting information provided by the embodiments of the application, in response to receiving an operation request of a user for a target product, the user is shot, an unplayed preset voice file is selected from a preset voice file set as the current preset voice file, and a multimedia file generation step is executed, the multimedia file generation step comprising: playing the current preset voice file to obtain video data and audio data for the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file. Then, in response to determining that the audio data comprises voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed, the video file and the audio file are merged to generate a target multimedia file. Finally, an authentication reference file comprising the target multimedia file is sent to the server so that the server authenticates the user based on it. The target multimedia file generated by shooting is thus used to authenticate the user, on-site verification of the user is avoided, and the efficiency of authenticating the user is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, in accordance with the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for outputting information according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for outputting information according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present application;
fig. 6 is a schematic structural diagram of a computer system suitable for implementing a terminal device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a web browser application, a search-type application, an instant messaging tool, video processing software, and the like.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices equipped with cameras, speakers, and microphones, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like. When they are software, they may be installed in the electronic devices listed above and implemented either as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server that provides various services, such as an information processing server that processes multimedia files transmitted by the terminal apparatuses 101, 102, 103. The information processing server may analyze and otherwise process the received data, such as the multimedia file, and feed back a processing result (e.g., an authentication result) to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the method for outputting information provided in the embodiments of the present application is generally performed by the terminal devices 101, 102, and 103, and accordingly, the apparatus for outputting information is generally disposed in the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present application is shown. The method for outputting information comprises the following steps:
step 201, in response to receiving an operation request of a user for a target product, shooting the user, selecting an unplayed preset voice file from a preset voice file set as a current preset voice file, and executing a multimedia file generation step.
In this embodiment, the execution body of the method for outputting information (e.g., the terminal devices 101, 102, 103 shown in fig. 1) may, in response to receiving an operation request of a user for a target product, shoot the user, select an unplayed preset voice file from a preset voice file set as the current preset voice file, and execute a multimedia file generation step. The target product may be a product that the user wants to operate on; specifically, it may be a virtual product (e.g., a financial product) or a physical product (e.g., a bank card). The operation request may be any of various requests initiated by the user and related to the target product (e.g., a purchase request, an investment request, or a transaction request). The preset voice file set corresponds to the target product and may include at least one preset voice file, where a preset voice file may contain voice pre-recorded for the target product for authentication purposes, for example, pre-recorded speech that asks the user questions about the target product. It should be noted that the preset voice file set may include unplayed preset voice files. Here, an unplayed flag may be set in advance for each unplayed preset voice file, and the execution body may select a preset voice file carrying the unplayed flag from the set as the current preset voice file. Alternatively, the execution body may determine the unplayed preset voice files by querying the play record and select one of them as the current preset voice file.
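The play-record variant of the selection step above can be sketched as follows. File names and the shape of the play record are illustrative assumptions:

```python
def select_unplayed(preset_voice_files, play_record):
    """Pick the first preset voice file that does not appear in the play
    record; return None once every file in the set has been played."""
    for voice_file in preset_voice_files:
        if voice_file not in play_record:
            return voice_file
    return None

# Hypothetical set of pre-recorded authentication questions.
voice_set = ["question_1.mp3", "question_2.mp3", "question_3.mp3"]
played = {"question_1.mp3"}
print(select_unplayed(voice_set, played))  # question_2.mp3
```

The flag-based variant would work the same way, except membership in the play record is replaced by checking an "unplayed" attribute stored with each file.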
In this embodiment, the multimedia file generating step may include:
in step 2011, the current preset voice file is played to obtain video data and audio data for the played preset voice file.
The video data may be video obtained by shooting the user, and the audio data may be audio recorded while shooting the user. Here, the audio data may include a voice response made by the user after listening to the played preset voice file, for example, an answer to a question posed in the preset voice file.
In some optional implementation manners of this embodiment, the execution main body may play a current preset voice file, and display text information preset for the played preset voice file. The preset text information may be prompt information preset by a technician for the played preset voice file. It can be understood that, by displaying the text information, the user can conveniently acquire the information and further make a voice response.
Step 2012, the obtained video data and audio data are encoded respectively to generate a video file and an audio file.
The video file may be a video conforming to a preset video format. The audio file may be audio conforming to a preset audio format. Here, the video in the preset video format and the audio in the preset audio format may be combined into a multimedia file.
For example, the execution body may encode the obtained video data according to the H.264 protocol to generate a video file in the H.264 format, and encode the obtained audio data according to the AAC protocol to generate an audio file in the AAC format. The video file in the H.264 format and the audio file in the AAC format may then be combined into a multimedia file.
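Such H.264/AAC encoding is commonly delegated to an external encoder such as FFmpeg. The sketch below only builds the command lines; the file names are placeholders, and while `-c:v libx264` and `-c:a aac` are standard FFmpeg flags, the surrounding capture pipeline is an assumption:

```python
def build_encode_commands(raw_video, raw_audio, video_out, audio_out):
    """Build ffmpeg command lines that encode captured video to H.264 and
    captured audio to AAC. The commands could be run with subprocess.run()."""
    video_cmd = ["ffmpeg", "-y", "-i", raw_video, "-c:v", "libx264", video_out]
    audio_cmd = ["ffmpeg", "-y", "-i", raw_audio, "-c:a", "aac", audio_out]
    return video_cmd, audio_cmd

video_cmd, audio_cmd = build_encode_commands(
    "capture_video.raw", "capture_audio.raw", "video.h264", "audio.aac")
print(" ".join(video_cmd))
```

In a real device the raw inputs would come straight from the camera and microphone rather than from files on disk.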
It should be noted that video coding and audio coding are well-known technologies that are widely researched and applied at present, and are not described herein again.
Step 202, in response to determining that the audio data includes voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed, merging the video file and the audio file to generate a target multimedia file.
In this embodiment, the executing entity (for example, the terminal devices 101, 102, and 103 shown in fig. 1) may combine the video file and the audio file to generate the target multimedia file in response to determining that the audio data includes voice data input by a user for a current preset voice file and that the preset voice file in the preset voice file set is played completely. And the target multimedia file is a multimedia file to be output and used for authenticating the user.
It can be understood that, as a condition for generating the target multimedia file by merging the video file and the audio file, the executing entity needs to determine whether the audio data includes the voice data input by the user for the current preset voice file, and whether the playing of the preset voice file in the preset voice file set is completed.
In this embodiment, the execution body may determine, through various methods, whether the audio data includes voice data input by the user for the current preset voice file. As an example, when the influence of environmental noise is small, the execution body may compare the sound signal of the preset voice file with the sound signal of the audio data: if the two signals are the same or similar, it may be determined that the audio data does not include voice data input by the user.
Alternatively, the execution body may recognize the audio data as text data and determine, based on the recognized text data, whether the audio data includes voice data input by the user. As an example, suppose the audio data includes the voice in the preset voice file, and a preset text has been obtained in advance by recognizing that voice. After recognizing the audio data as text data, the execution body may determine whether the recognized text data includes text other than the preset text; if so, it may determine that the audio data includes voice data input by the user. In addition, when the audio data does not include the voice in the preset voice file, whether the audio data includes voice data input by the user may be determined simply by checking whether any text data is recognized from the audio data (specifically, if text data is recognized, it may be determined that the audio data includes voice data input by the user).
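The text-based check can be sketched as below. The speech-recognition step itself is assumed to happen elsewhere; only the comparison of the recognized text against the preset text is shown, and the naive substring removal is an illustrative simplification of whatever matching a production system would use:

```python
def includes_user_speech(recognized_text, preset_text=""):
    """Heuristic from the description: if recognition yields any text beyond
    the preset prompt text, treat the audio as containing user speech.
    With an empty preset_text (the played prompt was not captured), any
    recognized text at all counts as user speech."""
    leftover = recognized_text.replace(preset_text, "") if preset_text else recognized_text
    return bool(leftover.strip())

print(includes_user_speech("Please state your name. Li Ming",
                           "Please state your name."))  # True
print(includes_user_speech("Please state your name.",
                           "Please state your name."))  # False
```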
It should be noted that speech recognition is a well-known technology widely studied and applied at present, and is not described herein again.
In this embodiment, the execution main body may determine whether the playing of the preset voice file in the preset voice file set is completed through various methods. As an example, the execution subject may determine whether the playing of the preset voice file is completed by determining whether the preset voice file set further includes a preset voice file with an unplayed identifier; or, the execution body may determine whether the preset voice file is played completely by searching for a play record.
Here, the execution body may combine the video file and the audio file by various methods to generate the target multimedia file, which is not limited herein. As an example, the execution agent may merge the video file and the audio file using pre-installed software (e.g., FFmpeg) to generate the target multimedia file. It should be noted that the video and audio synthesizing technology is a well-known technology widely studied and applied at present, and is not described herein again.
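As with encoding, the merge can be handed to FFmpeg, which the paragraph above names as example software. A minimal sketch of the command line (stream copy into one container; file names are illustrative):

```python
def build_merge_command(video_file, audio_file, output_file):
    """ffmpeg invocation that muxes an encoded video file and an encoded
    audio file into one multimedia container without re-encoding."""
    return ["ffmpeg", "-y", "-i", video_file, "-i", audio_file,
            "-c", "copy", output_file]

print(" ".join(build_merge_command("video.h264", "audio.aac", "target.mp4")))
```

`-c copy` keeps the already-encoded H.264 and AAC streams untouched, so merging is fast and lossless.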
In some optional implementation manners of this embodiment, the executing body may further determine whether the audio data does not include voice data input by a user for a current preset voice file; and in response to determining that the audio data does not include voice data input by the user for the current preset voice file, performing a multimedia file generating step.
In some optional implementation manners of this embodiment, after determining whether the audio data does not include the voice data input by the user for the current preset voice file, the executing body may further perform the following steps: in response to determining that the audio data comprises voice data input by a user for a current preset voice file, determining whether playing of the preset voice file in the voice file set is completed; and in response to the fact that the preset voice files in the voice file set are not played completely, selecting the preset voice files which are not played from the preset voice file set as current preset voice files, and executing a multimedia file generating step.
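Putting the two optional branches together, the replay loop of this embodiment can be sketched as follows. The `play_and_record` and `detect_user_speech` callbacks are assumptions standing in for the device's capture pipeline and voice-detection logic:

```python
def run_generation_loop(preset_voice_files, play_and_record, detect_user_speech):
    """Play each preset voice file in turn; if no user speech is detected for
    the current file, repeat the generation step for that same file, otherwise
    move on to the next unplayed file. Returns the recorded segments once
    every file in the set has been played and answered. (A real device would
    bound the retries; this sketch loops until speech is detected.)"""
    segments = []
    for voice_file in preset_voice_files:
        while True:
            video_data, audio_data = play_and_record(voice_file)
            if detect_user_speech(audio_data):
                segments.append((video_data, audio_data))
                break  # this file is answered; select the next unplayed one
    return segments
```

Once the loop finishes, the condition of step 202 holds (every file played and answered), and the merge into the target multimedia file can proceed.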
Step 203, sending the authentication reference file to the server, so that the server authenticates the user based on the authentication reference file.
In this embodiment, based on the target multimedia file obtained in step 202, the executing entity may send an authentication reference file to a server (e.g., the server 105 shown in fig. 1) to enable the server to authenticate the user based on the authentication reference file, where the authentication reference file may include the target multimedia file. Specifically, as an example, the server may play the target multimedia file for an auditor to audit, and obtain an authentication result input by the auditor.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 may, in response to receiving an operation request (a transaction request) 303 of the user 302 for a target product (a bank card), shoot the user 302, select an unplayed preset voice file from a preset voice file set as the current preset voice file, and perform the multimedia file generating step. Specifically, the current preset voice file may be played and the user 302 may be shot to obtain video data 305 and audio data 306 for the preset voice file; the obtained video data 305 and audio data 306 are encoded, respectively, to generate a video file 307 and an audio file 308. Then, the terminal device 301 may merge the video file 307 and the audio file 308 to generate the target multimedia file 309 in response to determining that the audio data 306 includes voice data input by the user 302 for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed. Finally, the terminal device 301 may send an authentication reference file 311 to the server 310 so that the server can authenticate the user based on the authentication reference file 311, where the authentication reference file 311 includes the target multimedia file 309.
In the method provided by the above embodiment of the present application, in response to receiving an operation request of a user for a target product, the user is photographed, an unplayed preset voice file is selected from a preset voice file set as a current preset voice file, and a multimedia file generating step is executed, where the multimedia file generating step includes: playing the current preset voice file to obtain video data and audio data for the played preset voice file; and encoding the obtained video data and audio data, respectively, to generate a video file and an audio file. Then, in response to determining that the audio data includes voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed, the video file and the audio file are combined to generate a target multimedia file. Finally, an authentication reference file is sent to the server so that the server authenticates the user based on the authentication reference file, where the authentication reference file includes the target multimedia file. The user is thus authenticated by means of the target multimedia file generated by shooting, in-person authentication of the user is avoided, and the efficiency of authenticating the user is improved.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for outputting information is shown. The process 400 of the method for outputting information includes the steps of:
step 401, in response to receiving an operation request of a user for a target product, shooting the user, selecting an unplayed preset voice file from a preset voice file set as a current preset voice file, and executing a multimedia file generation step.
In this embodiment, an execution subject (e.g., terminal devices 101, 102, 103 shown in fig. 1) on which the method for outputting information is executed may photograph a user in response to receiving an operation request of the user for a target product, select an unplayed preset voice file from a preset voice file set as a current preset voice file, and execute a multimedia file generation step.
Step 402, in response to determining that the audio data includes voice data input by a user for a current preset voice file and that playing of a preset voice file in a preset voice file set is completed, merging the video file and the audio file to generate a target multimedia file.
In this embodiment, the executing entity (for example, the terminal devices 101, 102, and 103 shown in fig. 1) may combine the video file and the audio file to generate the target multimedia file in response to determining that the audio data includes voice data input by a user for a current preset voice file and that the preset voice file in the preset voice file set is played completely.
Steps 401 and 402 are the same as steps 201 and 202 in the foregoing embodiment, respectively, and the above description of steps 201 and 202 also applies to steps 401 and 402; details are not described herein again.
And step 403, displaying preset product information for the target product.
In the present embodiment, the execution body on which the method for outputting information runs may display product information preset for the target product. The displayed product information is available for the user to view. The product information may be used to characterize attributes of the target product, and may include, but is not limited to, at least one of: characters, numbers, symbols, and images. Here, the execution body may display the product information preset for the target product in various forms, such as a web page or a picture.
Step 404, in response to receiving an information input request of the user for the product information, obtaining the text information input by the user.
In this embodiment, the executing entity may obtain the text information input by the user in response to receiving an information input request of the user for the product information, where the text information may include a signature of the user. Specifically, the execution body may store, in advance, an image for receiving text input by the user, and may then acquire the text information from that image.
And 405, adding the acquired text information to a preset image to generate a target image.
In this embodiment, based on the text information in step 404, the executing entity may add the acquired text information to a preset image to generate a target image, where the preset image may include the product information. The target image may be an image to be output for a related-art person to review.
Here, the execution body may add the acquired text information to the preset image through various methods. As an example, the execution body may identify the image area containing the text information, capture a screenshot of that area, and add the captured image to the preset image through an image fusion technique.
Optionally, the execution body may identify the pixel points constituting the text information, generate pixel points having the same pixel values as the identified pixel points on a preset blank image (i.e., reproduce the text information), obtain an image to be added that contains the text information, and add the image to be added to the preset image through an image fusion technique.
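The pixel-copy variant above can be sketched as follows, representing images as plain 2-D lists of grayscale values. The helper name `add_text_to_image`, the region coordinates, and the sample pixel values are hypothetical illustrations only; a real implementation would use an image fusion library rather than direct pixel copying.

```python
def add_text_to_image(preset_image, text_pixels, region_top, region_left):
    """Copy identified text pixels onto a preset image at a given region.

    `preset_image` is a 2-D list of grayscale values, and `text_pixels`
    holds (row, col, value) triples recognized from the user's input.
    A copy is modified so the preset image itself stays intact.
    """
    target = [row[:] for row in preset_image]
    for r, c, value in text_pixels:
        target[region_top + r][region_left + c] = value
    return target

# A 4x6 all-white preset image (255 = white); the "signature" pixels
# (0 = black) are placed starting at row 1, column 2.
preset = [[255] * 6 for _ in range(4)]
signature = [(0, 0, 0), (0, 1, 0), (1, 0, 0)]
result = add_text_to_image(preset, signature, region_top=1, region_left=2)
```

A preset image region (as in the optional implementation below) corresponds to fixing `region_top`/`region_left` to an area disjoint from the product information.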
It should be noted that image fusion is a well-known technique widely studied and applied at present, and is not described herein again.
In some optional implementations of this embodiment, the preset image may include a preset image region, where the preset image region is different from an image region of the product information on the preset image; and the execution body may add the acquired text information to the preset image area.
Step 406, sending the authentication reference file to the server, so that the server authenticates the user based on the authentication reference file.
In this embodiment, based on the target multimedia file obtained in step 402 and the target image obtained in step 405, the executing entity may send an authentication reference file to a server (e.g., the server 105 shown in fig. 1) to enable the server to authenticate the user based on the authentication reference file, where the authentication reference file may include the target multimedia file and the target image.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for outputting information in this embodiment highlights the steps of acquiring the text information input by the user, generating the target image from it, and outputting the target image as part of the authentication reference file. The scheme described in this embodiment can therefore introduce more data associated with the operation request sent by the user, thereby realizing more comprehensive information output.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: a first execution unit 501, a merging unit 502 and a sending unit 503. The first executing unit 501 is configured to, in response to receiving an operation request of a user for a target product, shoot the user, select an unplayed preset voice file from a preset voice file set as a current preset voice file, and execute a multimedia file generating step, where the multimedia file generating step includes: playing a current preset voice file to obtain video data and audio data aiming at the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file; a merging unit 502 configured to merge the video file and the audio file to generate a target multimedia file in response to determining that the audio data includes voice data input by a user for a current preset voice file and that playing of a preset voice file in a preset voice file set is completed; the sending unit 503 is configured to send an authentication reference file to the server, so that the server authenticates the user based on the authentication reference file, wherein the authentication reference file may include the target multimedia file.
In this embodiment, the first execution unit 501 of the apparatus 500 for outputting information may shoot a user in response to receiving an operation request of the user for a target product, select an unplayed preset voice file from a preset voice file set as a current preset voice file, and execute a multimedia file generation step. The target product may be a product that the user wants to operate on; specifically, the target product may be a virtual product (e.g., a financial product) or a physical product (e.g., a bank card). The operation request may be any of various requests related to the target product initiated by the user (e.g., a purchase request, an investment request, or a transaction request). The preset voice file set corresponds to the target product and may include at least one preset voice file, where a preset voice file may contain a voice for authentication, pre-recorded for the target product. It should be noted that the preset voice file set may include unplayed preset voice files. Here, an unplayed flag may be preset for each unplayed preset voice file, and the execution body may select an unplayed preset voice file from the preset voice file set as the current preset voice file. Alternatively, the execution body may determine and select an unplayed preset voice file by querying a play record, and then use the selected file as the current preset voice file.
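The play-record selection described above can be sketched as follows; the file names and the `select_current_file` helper are hypothetical, assuming the preset voice file set is an ordered list and the play record is a set of already-played names.

```python
def select_current_file(voice_files, play_record):
    """Pick the first preset voice file not found in the play record.

    Returns the selected file name, or None when every preset voice
    file in the set has already been played.
    """
    for name in voice_files:
        if name not in play_record:
            return name
    return None

files = ["q1.wav", "q2.wav", "q3.wav"]
played = {"q1.wav"}
current = select_current_file(files, played)  # "q1.wav" is skipped
```

The unplayed-flag variant is equivalent: the flag set takes the place of the play record, with membership inverted.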
In this embodiment, the multimedia file generating step may include:
in step 5011, the current preset voice file is played to obtain video data and audio data for the played preset voice file.
The video data may be video obtained by shooting the user, and the audio data may be audio captured while recording the user. Here, the audio data may include a voice response made by the user after listening to the played preset voice file, for example, an answer to a question posed in the preset voice file.
In step 5012, the obtained video data and audio data are encoded, respectively, to generate a video file and an audio file.
The video file may be a video conforming to a preset video format. The audio file may be audio conforming to a preset audio format. Here, the video in the preset video format and the audio in the preset audio format may be combined into a multimedia file.
It should be noted that video coding and audio coding are well-known technologies that are widely researched and applied at present, and are not described herein again.
In this embodiment, the merging unit 502 of the apparatus 500 for outputting information may merge the video file and the audio file to generate the target multimedia file in response to determining that the audio data includes voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed. The target multimedia file is the multimedia file to be output for authenticating the user.
It can be understood that, as the condition for combining the video file and the audio file into the target multimedia file, it is necessary to determine both whether the audio data includes voice data input by the user for the current preset voice file and whether playing of the preset voice files in the preset voice file set is completed.
In the present embodiment, the merging unit 502 may determine whether the audio data includes voice data input by the user for the current preset voice file through various methods. As an example, when environmental noise is low, the merging unit may make this determination by comparing the sound signal of the preset voice file with the sound signal of the audio data. Specifically, if the sound signal of the audio data is the same as or similar to the sound signal of the preset voice file, it may be determined that the audio data does not include voice data input by the user.
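The signal-comparison idea can be illustrated with a toy normalized-similarity measure; the threshold, the sample signal values, and the `signal_similarity` helper are all hypothetical simplifications of real acoustic comparison.

```python
def signal_similarity(a, b):
    """Normalized dot-product (cosine) similarity of two equal-length
    sound signals; 1.0 means the signals are proportional."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

prompt_signal = [0.0, 1.0, 0.0, -1.0]
echo_only = [0.0, 0.9, 0.0, -0.9]    # microphone picked up only the prompt
with_reply = [0.0, 1.0, 0.8, -0.2]   # user speech overlaps the prompt

# High similarity to the prompt -> the audio likely contains no user input.
no_user_voice = signal_similarity(prompt_signal, echo_only) > 0.95
user_voice = signal_similarity(prompt_signal, with_reply) <= 0.95
```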
Alternatively, the merging unit 502 may recognize the audio data as text data and determine, based on the recognized text data, whether the audio data includes voice data input by the user. As an example, suppose the audio data includes the voice in the preset voice file, and a preset text corresponding to that voice is stored in advance. After the audio data is recognized as text data, the merging unit may determine whether the recognized text data includes text other than the preset text; if so, it may be determined that the audio data includes voice data input by the user. In addition, it can be understood that when the audio data does not include the voice in the preset voice file, whether the audio data includes voice data input by the user may be determined simply by checking whether any text data is recognized from the audio data (specifically, if text data is recognized, it may be determined that the audio data includes voice data input by the user).
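A minimal sketch of this text-based check, assuming the preset text corresponding to the prompt is known in advance; real systems would align the two transcripts rather than use a plain string replace, and the helper name is hypothetical.

```python
def includes_user_voice(recognized_text, preset_text):
    """Decide whether recognized text contains speech beyond the preset
    voice file's own transcript: strip the preset text, and treat any
    remaining text as input from the user."""
    residual = recognized_text.replace(preset_text, "").strip()
    return len(residual) > 0

prompt = "please state your name"
has_voice = includes_user_voice("please state your name john smith", prompt)
silent = includes_user_voice("please state your name", prompt)
```

When the captured audio does not contain the prompt at all, `preset_text` can simply be the empty transcript, reducing the check to "was any text recognized".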
It should be noted that speech recognition is a well-known technology widely studied and applied at present, and is not described herein again.
In this embodiment, the merging unit 502 may determine whether the playing of the preset voice file in the preset voice file set is completed through various methods. As an example, the merging unit 502 may determine whether the playing of the preset voice file is completed by determining whether the preset voice file with the unplayed identifier is further included in the preset voice file set; alternatively, the merging unit 502 may determine whether the preset voice file is played completely by searching for the play record.
Here, the merging unit 502 may merge the video file and the audio file into the target multimedia file by various methods, which are not limited herein. It should be noted that the video and audio synthesizing technology is a well-known technology widely studied and applied at present, and is not described herein again.
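As a hedged illustration of one such method, the encoded video file and audio file could be muxed into a single container with a command-line tool such as ffmpeg, copying both streams without re-encoding. The file names are hypothetical, and the sketch only builds the command rather than executing it.

```python
def build_mux_command(video_path, audio_path, output_path):
    """Build an ffmpeg command that muxes an encoded video file and an
    encoded audio file into one multimedia container; `-c copy` copies
    both streams as-is, with no re-encoding."""
    return [
        "ffmpeg",
        "-i", video_path,   # encoded video stream, e.g. H.264
        "-i", audio_path,   # encoded audio stream, e.g. AAC
        "-c", "copy",       # merge by stream copy
        output_path,
    ]

cmd = build_mux_command("video.h264", "audio.aac", "target.mp4")
# The command would then be run with subprocess.run(cmd, check=True).
```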
In this embodiment, based on the target multimedia file obtained by the merging unit 502, the sending unit 503 may send an authentication reference file to a server (e.g., the server 105 shown in fig. 1) to enable the server to authenticate the user based on the authentication reference file, where the authentication reference file may include the target multimedia file.
In some optional implementations of this embodiment, the authentication reference file may further include a target image, and the target image may be generated based on the following steps: displaying product information preset for a target product; in response to receiving an information input request of a user for product information, acquiring text information input by the user, wherein the text information comprises a signature of the user; and adding the acquired text information to a preset image to generate a target image, wherein the preset image contains product information.
In some optional implementations of this embodiment, the preset image may include a preset image region, where the preset image region is different from an image region of the product information on the preset image; and the adding unit may be further configured to add the acquired text information to a preset image area.
In some optional implementation manners of this embodiment, playing the current preset voice file includes: and playing the current preset voice file, and displaying preset text information aiming at the played preset voice file.
In some optional implementations of this embodiment, the apparatus 500 for outputting information may further include: a determining unit (not shown in the drawings) configured to determine whether the audio data does not include voice data input by a user for a current preset voice file; and a second execution unit (not shown in the figure) configured to execute the multimedia file generation step in response to a determination that the audio data does not include voice data input by the user for the current preset voice file.
In some optional implementations of this embodiment, the second execution unit may be further configured to: in response to determining that the audio data comprises voice data input by a user for a current preset voice file, determining whether playing of the preset voice file in the voice file set is completed; and in response to the fact that the preset voice files in the voice file set are not played completely, selecting the preset voice files which are not played from the preset voice file set as current preset voice files, and executing a multimedia file generating step.
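The replay-and-advance logic of the determining unit and the second execution unit can be sketched as a loop; `capture` is a hypothetical stand-in for the play/record/encode steps, and the simulated attempts are illustrative only.

```python
def run_generation_loop(voice_files, capture):
    """Drive the multimedia-file generation step until every preset
    voice file has been played and answered.

    `capture(name)` plays preset voice file `name`, records the user,
    and returns (audio_has_user_voice, clip). A file with no user
    response is replayed; otherwise the next unplayed file is selected.
    """
    clips, played = [], set()
    while len(played) < len(voice_files):
        current = next(f for f in voice_files if f not in played)
        has_voice, clip = capture(current)
        if not has_voice:
            continue                 # no user input: repeat the same file
        clips.append(clip)
        played.add(current)
    return clips                     # ready to merge into the target file

# Simulated capture: the user misses "q1.wav" once, then answers.
attempts = {"q1.wav": [(False, None), (True, "clip1")],
            "q2.wav": [(True, "clip2")]}
def fake_capture(name):
    return attempts[name].pop(0)

clips = run_generation_loop(["q1.wav", "q2.wav"], fake_capture)
```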
In the apparatus 500 for outputting information provided in the foregoing embodiment of the present application, the first execution unit 501, in response to receiving an operation request of a user for a target product, shoots the user, selects an unplayed preset voice file from a preset voice file set as a current preset voice file, and executes a multimedia file generation step, where the multimedia file generation step includes: playing the current preset voice file to obtain video data and audio data for the played preset voice file; and encoding the obtained video data and audio data, respectively, to generate a video file and an audio file. Then, in response to determining that the audio data includes voice data input by the user for the current preset voice file and that playing of the preset voice files in the preset voice file set is completed, the merging unit 502 merges the video file and the audio file to generate a target multimedia file. Finally, the sending unit 503 sends an authentication reference file to the server so that the server authenticates the user based on the authentication reference file, where the authentication reference file includes the target multimedia file. The user is thus authenticated by means of the target multimedia file generated by shooting, in-person authentication of the user is avoided, and the efficiency of authenticating the user is improved.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a terminal device of an embodiment of the present application. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including keys, a microphone, and the like; an output portion 607 including a display screen, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. The drive 610 is also connected to the I/O interface 605 as needed. Note that when the terminal device is a tablet computer, a laptop portable computer, a desktop computer, or the like, the computer system 600 may further use a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory. The removable medium may be mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an execution unit, a merging unit, and a transmission unit. Where the names of these units do not in some cases constitute a limitation of the unit itself, for example, an execution unit may also be described as a "unit for performing a multimedia file generation step".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: responding to an operation request of a user for a target product, shooting the user, selecting an unplayed preset voice file from a preset voice file set as a current preset voice file, and executing a multimedia file generating step, wherein the multimedia file generating step comprises the following steps: playing a current preset voice file to obtain video data and audio data aiming at the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file; further causing the apparatus to: in response to the fact that the audio data comprise voice data input by a user aiming at the current preset voice file and the preset voice file in the preset voice file set is played completely, combining the video file and the audio file to generate a target multimedia file; and sending the authentication reference file to the server so that the server authenticates the user based on the authentication reference file, wherein the authentication reference file comprises the target multimedia file.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for outputting information, comprising:
responding to an operation request of a user for a target product, shooting the user, selecting an unplayed preset voice file from a preset voice file set as a current preset voice file, and executing a multimedia file acquisition step, wherein the multimedia file acquisition step comprises the following steps: playing the current preset voice file to obtain video data and audio data aiming at the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file; the method further comprises the following steps:
in response to the fact that the audio data comprise voice data input by the user aiming at the current preset voice file and the preset voice file in the preset voice file set is played completely, combining the video file and the audio file to generate a target multimedia file;
sending an authentication reference file to a server so that the server authenticates the user based on the authentication reference file, wherein the authentication reference file comprises the target multimedia file;
the method further comprises the following steps:
determining whether the audio data does not include voice data input by the user for the current preset voice file;
and jumping to the multimedia file acquisition step in response to determining that the audio data does not comprise voice data input by the user for the current preset voice file.
2. The method of claim 1, wherein the authentication reference further comprises a target image, the target image generated based on:
displaying product information preset for the target product;
in response to receiving an information input request of the user for the product information, acquiring text information input by the user, wherein the text information comprises a signature of the user;
and adding the acquired text information to a preset image to generate a target image, wherein the preset image comprises the product information.
3. The method of claim 2, wherein the preset image comprises a preset image area, wherein the preset image area is different from an image area of the product information on the preset image; and
the adding the acquired text information to the preset image comprises:
and adding the acquired text information into the preset image area.
4. The method of claim 1, wherein the playing the current preset voice file comprises:
and playing the current preset voice file, and displaying preset text information aiming at the played preset voice file.
5. The method of claim 1, wherein after the determining whether the audio data does not include voice data input by the user for the current preset voice file, the method further comprises:
in response to determining that the audio data includes voice data input by the user for the current preset voice file, determining whether playing of a preset voice file in the set of voice files is completed;
and in response to the fact that the preset voice files in the voice file set are not played completely, selecting the preset voice files which are not played from the preset voice file set as current preset voice files, and executing the multimedia file acquisition step.
6. An apparatus for outputting information, comprising:
the first execution unit is configured to respond to a received operation request of a user for a target product, shoot the user, select an unplayed preset voice file from a preset voice file set as a current preset voice file, and execute a multimedia file collection step, wherein the multimedia file collection step comprises: playing the current preset voice file to obtain video data and audio data aiming at the played preset voice file; respectively encoding the obtained video data and audio data to generate a video file and an audio file;
the device further comprises:
a merging unit configured to merge the video file and the audio file to generate a target multimedia file in response to determining that the audio data includes voice data input by the user for the current preset voice file and that playing of a preset voice file in the preset voice file set is completed;
a sending unit, configured to send an authentication reference file to a server, so that the server authenticates the user based on the authentication reference file, where the authentication reference file includes the target multimedia file;
the device further comprises:
a determining unit configured to determine whether the audio data does not include voice data input by the user for the current preset voice file;
a second execution unit configured to jump to the multimedia file collection step in response to determining that the audio data does not include voice data input by the user for the current preset voice file.
7. The apparatus of claim 6, wherein the authentication reference further comprises a target image, the target image generated based on:
displaying product information preset for the target product;
in response to receiving an information input request of the user for the product information, acquiring text information input by the user, wherein the text information comprises a signature of the user;
and adding the acquired text information to a preset image to generate a target image, wherein the preset image comprises the product information.
8. The apparatus of claim 7, wherein the preset image comprises a preset image area different from the image area of the product information on the preset image; and
the adding the acquired text information to the preset image comprises:
adding the acquired text information into the preset image area.
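Claims 7-8 place the user's signature text into a preset image area that is distinct from the area occupied by the product information. A minimal sketch of that constraint, reading "different from" strictly as "non-overlapping" (an illustrative assumption; the claim itself only requires the areas to differ), with rectangles as `(left, top, right, bottom)` tuples:

```python
def rects_overlap(a, b):
    """True if two axis-aligned rectangles (left, top, right, bottom) overlap."""
    return not (a[2] <= b[0] or b[2] <= a[0] or
                a[3] <= b[1] or b[3] <= a[1])

def place_signature(product_info_area, signature_area, text):
    """Sketch of claim 8: add the user's text into a preset image area
    that does not overlap the product-information area on the preset
    image. Returns the placement as (area, text); names are illustrative.
    """
    if rects_overlap(product_info_area, signature_area):
        raise ValueError("signature area must not overlap product information")
    return (signature_area, text)
```

A real implementation would render the text onto the image bitmap inside `signature_area`; the sketch only captures the layout rule the claim recites.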
9. The apparatus of claim 6, wherein the playing the current preset voice file comprises:
playing the current preset voice file, and displaying preset text information for the played preset voice file.
10. The apparatus of claim 6, wherein the second execution unit is further configured to:
in response to determining that the audio data includes voice data input by the user for the current preset voice file, determining whether playing of the preset voice files in the preset voice file set is completed;
and in response to determining that playing of the preset voice files in the preset voice file set is not completed, selecting an unplayed preset voice file from the preset voice file set as the current preset voice file, and executing the multimedia file collection step.
11. A terminal, comprising:
one or more processors;
a storage device storing one or more programs which,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer storage medium having a computer program stored thereon, wherein the program when executed by a processor implements the method of any one of claims 1-5.
CN201810226177.1A 2018-03-19 2018-03-19 Method and apparatus for outputting information Active CN108391141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810226177.1A CN108391141B (en) 2018-03-19 2018-03-19 Method and apparatus for outputting information

Publications (2)

Publication Number Publication Date
CN108391141A CN108391141A (en) 2018-08-10
CN108391141B true CN108391141B (en) 2020-03-31

Family

ID=63067661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810226177.1A Active CN108391141B (en) 2018-03-19 2018-03-19 Method and apparatus for outputting information

Country Status (1)

Country Link
CN (1) CN108391141B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409229B (en) * 2018-09-26 2021-11-02 江苏仲博敬陈信息科技有限公司 Article irregular distribution grabbing and identifying method based on mass image information
CN109389130B (en) * 2018-09-26 2021-11-19 常州海图电子科技有限公司 Method for grabbing and fusing irregularly distributed articles in visual identification process
CN111367592B (en) * 2018-12-07 2023-07-11 北京字节跳动网络技术有限公司 Information processing method and device
CN111866544B (en) * 2020-07-23 2022-12-02 京东科技控股股份有限公司 Data processing method, device, equipment and computer readable storage medium
CN113822195B (en) * 2021-09-23 2023-01-24 四川云恒数联科技有限公司 Government affair platform user behavior recognition feedback method based on video analysis

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697514B (en) * 2009-10-22 2016-08-24 中兴通讯股份有限公司 A kind of method and system of authentication
US20140136419A1 (en) * 2012-11-09 2014-05-15 Keith Shoji Kiyohara Limited use tokens granting permission for biometric identity verification
CN104361274B (en) * 2014-10-30 2018-02-16 深圳市富途网络科技有限公司 A kind of identity identifying method and its system based on video identification
CN105550928B (en) * 2015-12-03 2020-02-18 城银清算服务有限责任公司 System and method for remote account opening of commercial bank network
CN105610865A (en) * 2016-02-18 2016-05-25 中国银联股份有限公司 Method and device for authenticating identity of user based on transaction data
CN106600397A (en) * 2016-11-11 2017-04-26 深圳前海微众银行股份有限公司 Remote account opening method and device
CN107124415A (en) * 2017-04-28 2017-09-01 深圳市欧乐在线技术发展有限公司 A kind of service system and its implementation based on electric signal

Similar Documents

Publication Publication Date Title
CN108391141B (en) Method and apparatus for outputting information
US20210166241A1 (en) Methods, apparatuses, storage mediums and terminal devices for authentication
CN110879903A (en) Evidence storage method, evidence verification method, evidence storage device, evidence verification device, evidence storage equipment and evidence verification medium
US9390245B2 (en) Using the ability to speak as a human interactive proof
WO2022052630A1 (en) Method and apparatus for processing multimedia information, and electronic device and storage medium
CN110598460B (en) Block chain-based electronic signature method and device and storage medium
CN110536075B (en) Video generation method and device
CN109271757B (en) Off-line activation method and system for software
US20210304783A1 (en) Voice conversion and verification
CN113411642A (en) Screen projection method and device, electronic equipment and storage medium
US11520806B1 (en) Tokenized voice authenticated narrated video descriptions
CN111737675A (en) Block chain-based electronic signature method and device
CN110247898B (en) Identity verification method, identity verification device, identity verification medium and electronic equipment
US11553216B2 (en) Systems and methods of facilitating live streaming of content on multiple social media platforms
CN111935155B (en) Method, apparatus, server and medium for generating target video
CN109840406B (en) Living body verification method and device and computer equipment
US8825487B2 (en) Customized audio data for verifying the authenticity of a service provider
CN113191902A (en) Transaction processing method and device based on block chain, electronic equipment and medium
CN113162770A (en) Online signature method and system
CN112612919A (en) Video resource association method, device, equipment and medium
CN110602700A (en) Seed key processing method and device and electronic equipment
CN110635993B (en) Method and apparatus for synthesizing multimedia information
JP2017134548A (en) Information processor, information processing method, and program
CN113420133B (en) Session processing method, device, equipment and storage medium
CN111279330B (en) Method and apparatus for storing and managing audio data on a blockchain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 221, 2nd floor, Block C, 18 Kechuang Eleventh Street, Beijing Economic and Technological Development Zone, Haidian District, Beijing, 100176

Applicant after: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang Eleventh Street, Beijing Economic and Technological Development Zone, Haidian District, Beijing, 100176

Applicant before: BEIJING JINGDONG FINANCIAL TECHNOLOGY HOLDING Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 221, 2nd floor, Block C, 18 Kechuang Eleventh Street, Beijing Economic and Technological Development Zone, Haidian District, Beijing, 100176

Patentee after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang Eleventh Street, Beijing Economic and Technological Development Zone, Haidian District, Beijing, 100176

Patentee before: Jingdong Digital Technology Holding Co.,Ltd.

Address after: Room 221, 2nd floor, Block C, 18 Kechuang Eleventh Street, Beijing Economic and Technological Development Zone, Haidian District, Beijing, 100176

Patentee after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, 2nd floor, Block C, 18 Kechuang Eleventh Street, Beijing Economic and Technological Development Zone, Haidian District, Beijing, 100176

Patentee before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.
