CN110929209A - Method and device for sending information - Google Patents

Method and device for sending information

Info

Publication number
CN110929209A
CN110929209A
Authority
CN
China
Prior art keywords
vector
sample
network
processed
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911242321.1A
Other languages
Chinese (zh)
Inventor
Han Chao (韩超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911242321.1A
Publication of CN110929209A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06NCOMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computer systems based on biological models
    • G06N3/02Computer systems based on biological models using neural network models
    • G06N3/04Architectures, e.g. interconnection topology
    • G06N3/0454Architectures, e.g. interconnection topology using a combination of multiple neural nets
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06NCOMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computer systems based on biological models
    • G06N3/02Computer systems based on biological models using neural network models
    • G06N3/08Learning methods

Abstract

The embodiments of the present disclosure provide a method and a device for sending information. One embodiment of the method comprises: converting the to-be-processed portrait information of a user into a to-be-processed vector; importing the to-be-processed vector into a vector generation model to obtain a target vector corresponding to the to-be-processed vector, where the vector generation model is used for converting the to-be-processed vector into a corresponding target vector of a different dimension, and the target vector represents other portrait information of the user, different from that represented by the to-be-processed vector; and sending information to the terminal where the user is located according to the to-be-processed vector and the target vector. This embodiment improves the accuracy of the information sent.

Description

Method and device for sending information
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method and a device for sending information.
Background
With the development of information technology, information is transferred more and more frequently. People can establish data connections with various intelligent devices through networks and exchange information with one another, which has raised the level of informatization in work and daily life.
In the conventional method, in order to provide information to a user, a technician may label the content of the web pages the user has browsed together with the user's portrait information, and determine the correlation between the portrait information and the browsed web pages. The content of the browsed web pages and the user's portrait information may then be input into an intelligent algorithm to determine information the user may need.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for sending information.
In a first aspect, an embodiment of the present disclosure provides a method for sending information, the method including: converting the to-be-processed portrait information of a user into a to-be-processed vector; importing the to-be-processed vector into a vector generation model to obtain a target vector corresponding to the to-be-processed vector, where the vector generation model is used for converting the to-be-processed vector into a corresponding target vector of a different dimension; and sending information to the terminal where the user is located according to the to-be-processed vector and the target vector.
In some embodiments, the vector generation model includes a feature expansion network and a feature reduction network, and importing the to-be-processed vector into the vector generation model to obtain the target vector corresponding to the to-be-processed vector includes: inputting the to-be-processed vector into the feature expansion network to obtain an expansion vector corresponding to the to-be-processed vector, where the feature expansion network is used for performing vector transformation on the to-be-processed vector and increasing its vector dimension; and inputting the expansion vector into the feature reduction network to obtain the target vector, where the feature reduction network is used for performing vector transformation on the expansion vector and reducing its vector dimension.
In some embodiments, the vector generation model is trained by:
obtaining at least one sample vector, and generating an initial vector generation model and a mirror network corresponding to the initial vector generation model; taking each sample vector as the input of the initial vector generation model to obtain a sample target vector corresponding to each sample vector; inputting the sample target vector into the mirror network to obtain a mirror sample vector; and in response to the error between the mirror sample vector and the corresponding sample vector being smaller than a first set threshold, taking the initial vector generation model as the trained vector generation model.
In some embodiments, the initial vector generation model includes an initial feature expansion network and an initial feature reduction network, and taking each sample vector as the input of the initial vector generation model to obtain the sample target vector corresponding to each sample vector includes: taking each sample vector as the input of the initial feature expansion network, and inputting the output vector of the initial feature expansion network into the initial feature reduction network to obtain the sample target vector corresponding to each sample vector.
In some embodiments, the mirror network includes a mirror feature reduction network and a mirror feature expansion network, and inputting the sample target vector into the mirror network to obtain the mirror sample vector includes: taking the sample target vector corresponding to each sample vector as the input of the mirror feature reduction network, and inputting the output of the mirror feature reduction network into the mirror feature expansion network to obtain the mirror sample vector corresponding to each sample vector.
In some embodiments, the training step of the vector generation model further includes: verifying the accuracy of the sample target vector through a reference vector; and inputting the sample target vector to the mirror network in response to the error between the sample target vector and the reference vector being smaller than a second set threshold.
In a second aspect, an embodiment of the present disclosure provides an apparatus for sending information, the apparatus including: a to-be-processed vector obtaining unit configured to convert the to-be-processed portrait information of a user into a to-be-processed vector; a target vector obtaining unit configured to import the to-be-processed vector into a vector generation model to obtain a target vector corresponding to the to-be-processed vector, the vector generation model being used for converting the to-be-processed vector into a corresponding target vector of a different dimension; and an information sending unit configured to send information to the terminal where the user is located according to the to-be-processed vector and the target vector.
In some embodiments, the vector generation model includes a feature extension network and a feature reduction network, and the target vector obtaining unit includes: an extended vector obtaining subunit, configured to input the to-be-processed vector to the feature extended network, so as to obtain an extended vector corresponding to the to-be-processed vector, where the feature extended network is configured to perform vector transformation on the to-be-processed vector and increase a vector dimension of the to-be-processed vector; and the target vector acquisition subunit is configured to input the expansion vector to the feature reduction network to obtain a target vector, wherein the feature reduction network is used for performing vector transformation on the expansion vector and reducing the vector dimension of the expansion vector.
In some embodiments, the apparatus includes a vector generation model training unit configured to train the vector generation model, the vector generation model training unit including: an initial setting subunit configured to obtain at least one sample vector and generate an initial vector generation model and a mirror network corresponding to the initial vector generation model; a sample target vector obtaining subunit configured to take each sample vector as the input of the initial vector generation model to obtain a sample target vector corresponding to each sample vector; a mirror sample vector obtaining subunit configured to input the sample target vector into the mirror network to obtain a mirror sample vector; and a vector generation model judging subunit configured to, in response to the error between the mirror sample vector and the corresponding sample vector being smaller than a first set threshold, take the initial vector generation model as the trained vector generation model.
In some embodiments, the initial vector generation model includes an initial feature expansion network and an initial feature reduction network, and the sample target vector obtaining subunit includes: a sample target vector obtaining module configured to take each sample vector as the input of the initial feature expansion network, and input the output vector of the initial feature expansion network into the initial feature reduction network to obtain the sample target vector corresponding to each sample vector.
In some embodiments, the mirror network includes a mirror feature reduction network and a mirror feature expansion network, and the mirror sample vector obtaining subunit includes: a mirror sample vector obtaining module configured to take the sample target vector corresponding to each sample vector as the input of the mirror feature reduction network, and input the output of the mirror feature reduction network into the mirror feature expansion network to obtain the mirror sample vector corresponding to each sample vector.
In some embodiments, the vector generation model training unit further includes: a verification subunit configured to verify the accuracy of the sample target vector by using a reference vector; and the judging subunit is configured to input the sample target vector to the mirror network in response to the error between the sample target vector and the reference vector being smaller than a second set threshold.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to perform the method for sending information of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method for sending information of the first aspect.
First, the to-be-processed portrait information of a user is converted into a to-be-processed vector. Then, the to-be-processed vector is imported into a vector generation model to obtain a target vector corresponding to the to-be-processed vector. The target vector represents other portrait information of the user, different from that represented by the to-be-processed vector. Finally, information is sent to the terminal where the user is located according to the to-be-processed vector and the target vector, which improves the accuracy of the information sent.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for transmitting information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for transmitting information according to the present disclosure;
FIG. 4 is a flow diagram for one embodiment of a vector generation model training method according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for transmitting information according to the present disclosure;
FIG. 6 is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 of a method for transmitting information or an apparatus for transmitting information to which embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting user operations, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as a plurality of software or software modules (for example, for providing distributed services), or as a single software or software module, which is not specifically limited herein.
The server 105 may be a server that provides various services, for example, a server that processes portrait information of users on the terminal apparatuses 101, 102, 103. The server can analyze and process the acquired data such as vectors to be processed and the like which can represent the portrait information of the user, and sends the information to the terminal where the user is located according to the processing result.
It should be noted that the method for sending information provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for sending information is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, to provide distributed services), or may be implemented as a single software or software module, and is not limited specifically herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for transmitting information in accordance with the present disclosure is shown. The method for transmitting information includes the steps of:
step 201, converting the to-be-processed image information of the user into to-be-processed vector.
In the present embodiment, the execution subject of the method for sending information (e.g., the server 105 shown in fig. 1) may acquire the to-be-processed portrait information from the terminal where the user is located through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra-wideband) connection, and other wireless connection means now known or developed in the future.
In the existing method, a technician can only label features based on part of a user's portrait information and cannot consider the user's portrait information comprehensively. As a result, the accuracy of the pushed information is not high.
The execution subject can directly acquire part of the user's portrait information (for example, registration information on the terminal such as work type, gender, and interests), and then convert this partial portrait information into a to-be-processed vector in various ways. That is, the to-be-processed vector can represent the to-be-processed portrait information.
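The conversion in step 201 can be illustrated with a minimal sketch. The field names, vocabularies, and one-hot encoding scheme below are illustrative assumptions, not specified by the patent:

```python
# Hypothetical sketch: encoding a user's registration info (work type,
# gender, interests) into a fixed-length to-be-processed vector.
WORK_TYPES = ["engineer", "teacher", "doctor"]
INTERESTS = ["sports", "music", "travel"]

def encode_portrait(work_type, gender, interests):
    """One-hot encode categorical portrait fields into one flat vector."""
    vec = [1.0 if w == work_type else 0.0 for w in WORK_TYPES]  # work type
    vec.append(1.0 if gender == "female" else 0.0)              # gender flag
    vec.extend(1.0 if i in interests else 0.0 for i in INTERESTS)
    return vec

v = encode_portrait("engineer", "female", {"music"})
# v has len(WORK_TYPES) + 1 + len(INTERESTS) = 7 entries
```

In practice an embedding layer or learned encoder could replace the one-hot scheme; the point is only that the portrait fields become a numeric vector of fixed dimension.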
Step 202, importing the vector to be processed into a vector generation model to obtain a target vector corresponding to the vector to be processed.
In order to obtain more user portrait information and improve the accuracy of the pushed information, the execution subject may import the to-be-processed vector into the vector generation model to obtain a target vector corresponding to the to-be-processed vector. The vector generation model may be configured to convert the to-be-processed vector into a corresponding target vector of a different dimension. The target vector represents other portrait information of the user, different from the to-be-processed portrait information. That is, the target vector contains new portrait information about the user.
In some optional implementation manners of this embodiment, the vector generation model may include a feature expansion network and a feature reduction network, and the importing the to-be-processed vector into the vector generation model to obtain the target vector corresponding to the to-be-processed vector may include the following steps:
firstly, inputting the vector to be processed into the feature expansion network to obtain an expansion vector corresponding to the vector to be processed.
In order to generate other possible portrait information from the existing portrait information, the execution subject can input the to-be-processed vector into the feature expansion network to obtain an expansion vector corresponding to the to-be-processed vector. The feature expansion network may be configured to perform vector transformation on the to-be-processed vector and increase its vector dimension. The feature expansion network may be a network comprising a plurality of nodes; each node may receive part or all of the portrait information in the to-be-processed vector and set a corresponding weight for each piece of portrait information. The feature expansion network may then perform a weighted combination of the portrait information received by each node to obtain the corresponding expansion vector.
And secondly, inputting the expansion vector into the characteristic simplified network to obtain a target vector.
The expansion vector contains possible new portrait information, but also non-portrait information. The execution subject may input the expansion vector into the feature reduction network to obtain the target vector. The feature reduction network may be configured to perform vector transformation on the expansion vector and reduce its vector dimension.
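The expand-then-reduce forward pass described in step 202 can be sketched as two weight matrices. The layer sizes, random initialization, and tanh nonlinearity are illustrative assumptions; the patent only specifies that the expansion network raises the vector dimension and the reduction network lowers it:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    """A layer of nodes, each taking a weighted combination of its inputs."""
    return rng.standard_normal((n_in, n_out)) * 0.1

class VectorGenerationModel:
    """Sketch: feature expansion raises dimension, feature reduction lowers it."""
    def __init__(self, d_in=7, d_hidden=64, d_out=3):
        self.expand = make_layer(d_in, d_hidden)   # feature expansion network
        self.reduce = make_layer(d_hidden, d_out)  # feature reduction network

    def __call__(self, x):
        expanded = np.tanh(x @ self.expand)  # higher-dimensional expansion vector
        return expanded @ self.reduce        # lower-dimensional target vector

model = VectorGenerationModel()
target = model(np.ones(7))  # target.shape == (3,)
```

The target vector's dimension differs from the input's, matching the claim that the model converts the to-be-processed vector into a corresponding target vector of a different dimension.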
Step 203, sending information to the terminal where the user is located according to the vector to be processed and the target vector.
As can be seen from the above description, the target vector represents other portrait information of the user, different from that represented by the to-be-processed vector. Thus, obtaining the target vector amounts to generating other portrait information about the user. The execution subject can then take the target vector and the to-be-processed vector as the input of an intelligent algorithm to determine the information to push to the terminal where the user is located, improving the accuracy of the pushed information.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for sending information according to the present embodiment. In the application scenario of fig. 3, the server 105 converts the to-be-processed portrait information acquired from the terminal device 102 (for example, gender, work type, etc.) into a to-be-processed vector. The to-be-processed vector is then imported into the vector generation model to obtain a target vector (which may represent, for example, an estimated annual salary). Finally, the server 105 sends information (for example, house-purchasing information) to the terminal device 102 according to the to-be-processed vector and the target vector.
The method provided by the above embodiment of the present disclosure first converts the to-be-processed portrait information of a user into a to-be-processed vector. Then, the to-be-processed vector is imported into a vector generation model to obtain a target vector corresponding to the to-be-processed vector. The target vector represents other portrait information of the user, different from that represented by the to-be-processed vector. Finally, information is sent to the terminal where the user is located according to the to-be-processed vector and the target vector, which improves the accuracy of the information sent.
With further reference to FIG. 4, a flow 400 of one embodiment of a vector generation model training method is illustrated. The process 400 of the vector generative model training method comprises the following steps:
step 401, obtaining at least one sample vector, and generating an initial vector generation model and a mirror network corresponding to the initial vector generation model.
In this embodiment, the executing entity of the vector generation model training method (for example, the server 105 shown in fig. 1) may obtain at least one sample vector through a wired or wireless connection, where a sample vector may represent a single piece of the user's portrait information. The executing entity can also generate, randomly or by configuration, an initial vector generation model and a mirror network corresponding to the initial vector generation model.
Step 402, taking each sample vector as the input of the initial vector generation model, and obtaining a sample target vector corresponding to each sample vector.
The execution subject may use each sample vector as an input of the initial vector generation model to obtain a sample target vector corresponding to the sample vector.
In some optional implementations of this embodiment, taking each sample vector as the input of the initial vector generation model to obtain the sample target vector corresponding to each sample vector may include: taking each sample vector as the input of the initial feature expansion network, and inputting the output vector of the initial feature expansion network into the initial feature reduction network to obtain the sample target vector corresponding to each sample vector.
The trained vector generation model comprises a feature expansion network and a feature reduction network, so the initial vector generation model comprises a corresponding initial feature expansion network and initial feature reduction network. The executing entity may input each of the at least one sample vector into the initial feature expansion network in turn, and take the output of the initial feature expansion network as the input of the initial feature reduction network to obtain the sample target vector.
Step 403, inputting the sample target vector to the mirror network to obtain a mirror sample vector.
The sample target vector is generated by the sample vector through the initial vector generation model. To verify the stability and interpretability of the initial vector generative model, the executing entity may input the sample target vector to the mirror network described above, resulting in a mirror sample vector.
In some optional implementations of this embodiment, inputting the sample target vector into the mirror network to obtain the mirror sample vector may include: taking the sample target vector corresponding to each sample vector as the input of the mirror feature reduction network, and inputting the output of the mirror feature reduction network into the mirror feature expansion network to obtain the mirror sample vector corresponding to each sample vector.
The mirror network of the present application may include a mirror feature reduction network and a mirror feature expansion network. The structure of the mirror feature reduction network mirrors that of the initial feature reduction network: the input and output of the initial feature reduction network correspond to the output and input, respectively, of the mirror feature reduction network. Similarly, the input and output of the mirror feature expansion network correspond to the output and input, respectively, of the initial feature expansion network.
Similar to the process of obtaining the sample target vector, the executing entity may take the sample target vector as the input of the mirror feature reduction network, and take the output of the mirror feature reduction network as the input of the mirror feature expansion network to obtain the mirror sample vector corresponding to the sample vector.
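The mirrored dimensions can be sketched as follows. The specific sizes (7, 64, 3), random weights, and tanh nonlinearity are illustrative assumptions; what the text fixes is only that the mirror stages swap the input/output dimensions of the forward stages:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(n_in, n_out):
    return rng.standard_normal((n_in, n_out)) * 0.1

# Forward model: expand (7 -> 64) then reduce (64 -> 3).
expand = layer(7, 64)
reduce_w = layer(64, 3)

# Mirror network: a reduction stage whose dimensions swap those of the
# forward reduction network (3 -> 64), then an expansion stage swapping
# those of the forward expansion network (64 -> 7).
mirror_reduce = layer(3, 64)
mirror_expand = layer(64, 7)

sample = np.ones(7)
sample_target = np.tanh(sample @ expand) @ reduce_w                      # 3-dim
mirror_sample = np.tanh(sample_target @ mirror_reduce) @ mirror_expand   # back to 7-dim

# The training criterion compares the mirror sample vector to the original.
reconstruction_error = np.abs(mirror_sample - sample).mean()
```

This is structurally an autoencoder-style reconstruction check: if the mirror network can recover the sample vector from the sample target vector, the forward model has not discarded the original portrait information.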
In some optional implementation manners of this embodiment, the method of this embodiment may further include the following steps:
firstly, verifying the accuracy of the sample target vector through a reference vector.
To determine the accuracy of the sample target vector, the execution subject may verify the accuracy of the sample target vector with the reference vector. The reference vector may be sample image information of the user corresponding to the sample vector. The execution subject compares the sample target vector with the reference vector, and can judge the correctness of the sample target vector.
And secondly, responding to the error between the sample target vector and the reference vector being smaller than a second set threshold value, and inputting the sample target vector to the mirror network.
When the error between the sample target vector and the reference vector is smaller than a second set threshold, it indicates that the weight values corresponding to the nodes in the initial feature expansion network and the initial feature reduction network included in the initial vector generation model at this time are accurate. At this time, the execution principal may input the sample target vector to the mirror network. For example, the second set threshold is 5%. The sample image information of the user corresponding to the sample vector is: XX engineer. The sample target vector is 50 thousands of annual salaries, the reference vector is 51 thousands of annual salaries, the error is 1 ten thousands, and the error is smaller than a second set threshold. At this time, the execution principal may input the sample target vector to the mirror network.
Step 404, in response to the error between the mirror sample vector and the corresponding sample vector being smaller than a first set threshold, taking the initial vector generation model as the trained vector generation model.
After obtaining the mirror sample vector, the execution subject may compute the error between the mirror sample vector and the sample vector. When this error is smaller than the first set threshold, the initial vector generation model is shown to have reliable information conversion capability, and the weights corresponding to the nodes in the initial feature expansion network and the initial feature reduction network are sufficiently accurate. At this point, the execution subject may take the initial vector generation model as the trained vector generation model.
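The two acceptance checks in the training flow above can be sketched together. The relative-error metric and the 5% thresholds are illustrative assumptions (the patent only says each error must fall below a set threshold):

```python
def training_converged(mirror_samples, samples, targets, references,
                       first_threshold=0.05, second_threshold=0.05):
    """Sketch of the two checks: each sample target vector must stay within
    the second threshold of its reference vector, and each mirror sample
    vector within the first threshold of its original sample vector."""
    target_ok = all(
        abs(t - r) / abs(r) < second_threshold
        for t, r in zip(targets, references)
    )
    mirror_ok = all(
        abs(m - s) / abs(s) < first_threshold
        for m, s in zip(mirror_samples, samples)
    )
    return target_ok and mirror_ok
```

With the annual-salary example from the text, a target of 500,000 against a reference of 510,000 gives a relative error of about 2%, so the target check passes under a 5% threshold; only then is the mirror reconstruction check applied.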
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for sending information, which corresponds to the method embodiment shown in fig. 2 and is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for sending information of the present embodiment may include: a to-be-processed vector acquisition unit 501, a target vector acquisition unit 502, and an information sending unit 503. The to-be-processed vector acquisition unit 501 is configured to convert the to-be-processed portrait information of a user into a to-be-processed vector. The target vector acquisition unit 502 is configured to import the to-be-processed vector into a vector generation model to obtain a target vector corresponding to the to-be-processed vector, where the vector generation model is used to convert the to-be-processed vector into target vectors of dimensions different from that of the to-be-processed vector, and the target vectors characterize portrait information of the user other than the to-be-processed portrait information. The information sending unit 503 is configured to send information to the terminal where the user is located according to the to-be-processed vector and the target vector.
In some optional implementations of this embodiment, the vector generation model includes a feature expansion network and a feature reduction network, and the target vector acquisition unit 502 may include: an extended vector acquisition subunit (not shown in the figure) and a target vector acquisition subunit (not shown in the figure). The extended vector acquisition subunit is configured to input the to-be-processed vector to the feature expansion network to obtain an extended vector corresponding to the to-be-processed vector, where the feature expansion network performs vector transformation on the to-be-processed vector and increases its vector dimension. The target vector acquisition subunit is configured to input the extended vector to the feature reduction network to obtain the target vector, where the feature reduction network performs vector transformation on the extended vector and reduces its vector dimension.
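The expand-then-reduce forward pass these two subunits describe can be sketched as a pair of linear maps. The dimensions, the tanh nonlinearity, and the random weights below are placeholders; the real networks are trained as described for fig. 4:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration only; the patent fixes none of them.
D_IN, D_EXPANDED, D_OUT = 8, 32, 4

# Feature expansion network: a linear layer that raises the dimension (8 -> 32).
W_expand = rng.standard_normal((D_EXPANDED, D_IN))
# Feature reduction network: a linear layer that lowers it again (32 -> 4).
W_reduce = rng.standard_normal((D_OUT, D_EXPANDED))

def vector_generation_model(to_be_processed):
    expanded = np.tanh(W_expand @ to_be_processed)  # extended vector
    return W_reduce @ expanded                      # target vector

target_vector = vector_generation_model(rng.standard_normal(D_IN))
```

The point of the hourglass shape is that the target vector need not share the dimension of the to-be-processed vector, matching the model's stated purpose of producing vectors of different dimensions.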
In some optional implementations of this embodiment, the apparatus 500 for sending information may include a vector generation model training unit (not shown in the figure) configured to train a vector generation model, where the vector generation model training unit may include: an initial setting subunit (not shown in the figure), a sample target vector obtaining subunit (not shown in the figure), a mirror sample vector obtaining subunit (not shown in the figure), and a vector generation model judging subunit (not shown in the figure). The initial setting subunit is configured to obtain at least one sample vector and generate an initial vector generation model and a mirror network corresponding to the initial vector generation model; the sample target vector obtaining subunit is configured to use each sample vector as an input of the initial vector generation model to obtain a sample target vector corresponding to each sample vector in the at least one sample vector; a mirror image sample vector obtaining subunit configured to input the sample target vector to the mirror image network to obtain a mirror image sample vector; the vector generation model judgment subunit is configured to take the initial vector generation model as a trained vector generation model in response to an error between the mirror image sample vector and the corresponding sample vector being smaller than a first set threshold.
In some optional implementations of this embodiment, the initial vector generation model includes an initial feature expansion network and an initial feature reduction network, and the sample target vector acquisition subunit may include: a sample target vector acquisition module (not shown in the figure) configured to take each sample vector as an input of the initial feature expansion network, and input the output vector of the initial feature expansion network to the initial feature reduction network to obtain the sample target vector corresponding to each sample vector.
In some optional implementations of this embodiment, the mirror network includes a mirror feature reduction network and a mirror feature expansion network, and the mirror sample vector acquisition subunit may include: a mirror sample vector acquisition module (not shown in the figure) configured to take the sample target vector corresponding to each sample vector as an input of the mirror feature reduction network, and input the output of the mirror feature reduction network to the mirror feature expansion network to obtain the mirror sample vector corresponding to each sample vector.
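The mirror network reverses the model's order: the reduction-side mirror runs first, then the expansion-side mirror, mapping a sample target vector back toward the original sample vector. A sketch with independent placeholder weights follows; whether the mirror shares or transposes the model's weights is not specified in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed dimensions matching the model sketch: 8-d sample, 32-d expanded, 4-d target.
D_SAMPLE, D_EXPANDED, D_TARGET = 8, 32, 4

# Mirror feature reduction network: mirrors the reduction step, so it maps
# the 4-d sample target vector back up to the 32-d expanded space.
W_mirror_reduce = rng.standard_normal((D_EXPANDED, D_TARGET))
# Mirror feature expansion network: mirrors the expansion step, so it maps
# the 32-d hidden vector down to an 8-d mirror sample vector.
W_mirror_expand = rng.standard_normal((D_SAMPLE, D_EXPANDED))

def mirror_network(sample_target):
    hidden = np.tanh(W_mirror_reduce @ sample_target)
    return W_mirror_expand @ hidden  # mirror sample vector

mirror_sample = mirror_network(rng.standard_normal(D_TARGET))
```

Comparing this mirror sample vector against the original sample vector gives the reconstruction error used in step 404.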
In some optional implementations of this embodiment, the vector generation model training unit further includes: a verification subunit (not shown in the figure) and a judgment subunit (not shown in the figure). The verification subunit is configured to verify the accuracy of the sample target vector through a reference vector; and the judging subunit is configured to input the sample target vector to the mirror network in response to the error between the sample target vector and the reference vector being smaller than a second set threshold.
The present embodiment also provides an electronic device, including: one or more processors; and a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to perform the above-described method for sending information.
The present embodiment also provides a computer-readable medium on which a computer program is stored which, when executed by a processor, implements the above-described method for sending information.
Referring now to fig. 6, shown is a block diagram of a computer system 600 of an electronic device (e.g., server 105 of fig. 1) suitable for implementing an embodiment of the present disclosure. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium mentioned above in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: convert the to-be-processed portrait information of a user into a to-be-processed vector; import the to-be-processed vector into a vector generation model to obtain a target vector corresponding to the to-be-processed vector, where the vector generation model is used to convert the to-be-processed vector into target vectors of dimensions different from that of the to-be-processed vector, and the target vectors characterize other portrait information of the user different from the to-be-processed portrait information; and send information to the terminal where the user is located according to the to-be-processed vector and the target vector.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor including a to-be-processed vector acquisition unit, a target vector acquisition unit, and an information sending unit. The names of these units do not, in some cases, limit the units themselves; for example, the information sending unit may also be described as a "unit that sends information based on the generated portrait information".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is possible without departing from the inventive concept as defined above. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (14)

1. A method for transmitting information, comprising:
converting the to-be-processed portrait information of a user into a to-be-processed vector;
importing the vector to be processed into a vector generation model to obtain a target vector corresponding to the vector to be processed, wherein the vector generation model is used for converting the vector to be processed into target vectors corresponding to different dimensions of the vector to be processed;
and sending information to the terminal where the user is located according to the vector to be processed and the target vector.
2. The method of claim 1, wherein the vector generation model comprises a feature extension network and a feature reduction network, and
the step of importing the vector to be processed into a vector generation model to obtain a target vector corresponding to the vector to be processed includes:
inputting the vector to be processed into the feature expansion network to obtain an expansion vector corresponding to the vector to be processed, wherein the feature expansion network is used for performing vector transformation on the vector to be processed and increasing the vector dimension of the vector to be processed;
and inputting the expansion vector into the feature simplified network to obtain a target vector, wherein the feature simplified network is used for carrying out vector transformation on the expansion vector and reducing the vector dimension of the expansion vector.
3. The method of claim 1, wherein the vector generation model is trained by:
obtaining at least one sample vector, and generating an initial vector generation model and a mirror network corresponding to the initial vector generation model;
taking each sample vector as an input of the initial vector generation model to obtain a sample target vector corresponding to each sample vector;
inputting the sample target vector into the mirror image network to obtain a mirror image sample vector;
and in response to the error between the mirror image sample vector and the corresponding sample vector being smaller than a first set threshold, taking the initial vector generation model as a trained vector generation model.
4. The method of claim 3, wherein the initial vector generation model comprises: an initial feature expansion network and an initial feature reduction network, and
taking each sample vector as an input of the initial vector generation model to obtain a sample target vector corresponding to each sample vector comprises:
taking each sample vector as the input of the initial feature expansion network, and inputting the output vector of the initial feature expansion network into the initial feature reduction network to obtain the sample target vector corresponding to each sample vector.
5. The method of claim 4, wherein the mirror network comprises: a mirror feature reduction network and a mirror feature expansion network, and
inputting the sample target vector into the mirror network to obtain a mirror sample vector comprises:
taking the sample target vector corresponding to each sample vector as the input of the mirror feature reduction network, and inputting the output of the mirror feature reduction network to the mirror feature expansion network to obtain a mirror sample vector corresponding to each sample vector.
6. The method of claim 3, wherein the training of the vector generation model further comprises:
verifying the accuracy of the sample target vector through a reference vector;
inputting the sample target vector to the mirror network in response to an error between the sample target vector and the reference vector being less than a second set threshold.
7. An apparatus for transmitting information, comprising:
a to-be-processed vector obtaining unit configured to convert the to-be-processed image information of the user into a to-be-processed vector;
the target vector acquisition unit is configured to import the vector to be processed into a vector generation model to obtain a target vector corresponding to the vector to be processed, and the vector generation model is used for converting the vector to be processed into target vectors corresponding to different dimensions of the vector to be processed;
and the information sending unit is configured to send information to the terminal where the user is located according to the vector to be processed and the target vector.
8. The apparatus of claim 7, wherein the vector generation model comprises a feature extension network and a feature reduction network, and
the target vector acquisition unit includes:
the extended vector obtaining subunit is configured to input the vector to be processed to the feature extended network, so as to obtain an extended vector corresponding to the vector to be processed, where the feature extended network is configured to perform vector transformation on the vector to be processed and increase the vector dimension of the vector to be processed;
and the target vector acquisition subunit is configured to input the expansion vector to the feature reduction network to obtain a target vector, wherein the feature reduction network is used for performing vector transformation on the expansion vector and reducing the vector dimension of the expansion vector.
9. The apparatus of claim 7, wherein the apparatus comprises a vector generation model training unit configured to train the vector generation model, the vector generation model training unit comprising:
the system comprises an initial setting subunit, a first setting unit and a second setting unit, wherein the initial setting subunit is configured to obtain at least one sample vector and generate an initial vector generation model and a mirror image network corresponding to the initial vector generation model;
a sample target vector obtaining subunit configured to use each of the sample vectors as an input of the initial vector generation model to obtain a sample target vector corresponding to each of the sample vectors;
a mirror image sample vector obtaining subunit configured to input the sample target vector to the mirror image network, so as to obtain a mirror image sample vector;
a vector generation model judgment subunit configured to, in response to an error between the mirror sample vector and the corresponding sample vector being smaller than a first set threshold, take the initial vector generation model as a trained vector generation model.
10. The apparatus of claim 9, wherein the initial vector generation model comprises: an initial feature expansion network and an initial feature reduction network, and
the sample target vector acquisition subunit includes:
a sample target vector acquisition module configured to take each sample vector as the input of the initial feature expansion network, and input the output vector of the initial feature expansion network into the initial feature reduction network to obtain the sample target vector corresponding to each sample vector.
11. The apparatus of claim 10, wherein the mirror network comprises: a mirror feature reduction network and a mirror feature expansion network, and
the mirror sample vector acquisition subunit includes:
a mirror sample vector acquisition module configured to take the sample target vector corresponding to each sample vector as the input of the mirror feature reduction network, and input the output of the mirror feature reduction network to the mirror feature expansion network to obtain the mirror sample vector corresponding to each sample vector.
12. The apparatus of claim 9, wherein the vector generation model training unit further comprises:
a verification subunit configured to verify the accuracy of the sample target vector through a reference vector; and
a determining subunit configured to input the sample target vector to the mirror network in response to an error between the sample target vector and the reference vector being smaller than a second set threshold.
13. An electronic device, comprising:
one or more processors;
a memory having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
CN201911242321.1A 2019-12-06 2019-12-06 Method and device for sending information Pending CN110929209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911242321.1A CN110929209A (en) 2019-12-06 2019-12-06 Method and device for sending information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911242321.1A CN110929209A (en) 2019-12-06 2019-12-06 Method and device for sending information

Publications (1)

Publication Number Publication Date
CN110929209A true CN110929209A (en) 2020-03-27

Family

ID=69857333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911242321.1A Pending CN110929209A (en) 2019-12-06 2019-12-06 Method and device for sending information

Country Status (1)

Country Link
CN (1) CN110929209A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313086A (en) * 2021-07-28 2021-08-27 长沙海信智能系统研究院有限公司 Feature vector conversion model processing method, device, server and storage medium
CN113313086B (en) * 2021-07-28 2021-10-29 长沙海信智能系统研究院有限公司 Feature vector conversion model processing method, device, server and storage medium

Similar Documents

Publication Publication Date Title
CN109800732B (en) Method and device for generating cartoon head portrait generation model
WO2020199659A1 (en) Method and apparatus for determining push priority information
CN110929209A (en) Method and device for sending information
CN111354345B (en) Method, apparatus, device and medium for generating speech model and speech recognition
CN110009101B (en) Method and apparatus for generating a quantized neural network
CN113468344B (en) Entity relationship extraction method and device, electronic equipment and computer readable medium
CN110084298B (en) Method and device for detecting image similarity
JP2021103506A (en) Method and device for generating information
US20210279109A1 (en) Method and apparatus for acquiring information
CN111324470A (en) Method and device for generating information
CN109598344B (en) Model generation method and device
US20200219161A1 (en) Method and apparatus for generating information
CN111278085A (en) Method and device for acquiring target network
CN111767290A (en) Method and apparatus for updating a user representation
CN110991661A (en) Method and apparatus for generating a model
CN112464039A (en) Data display method and device of tree structure, electronic equipment and medium
CN113778846A (en) Method and apparatus for generating test data
CN112417151A (en) Method for generating classification model and method and device for classifying text relation
CN112764652A (en) Data storage method, device, equipment and medium based on workflow engine
CN112507676A (en) Energy report generation method and device, electronic equipment and computer readable medium
CN113486749A (en) Image data collection method, device, electronic equipment and computer readable medium
CN113191257A (en) Order of strokes detection method and device and electronic equipment
CN110619537A (en) Method and apparatus for generating information
CN111026849A (en) Data processing method and device
CN111797263A (en) Image label generation method, device, equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination