CN107910060A - Method and apparatus for generating information - Google Patents
- Publication number
- CN107910060A CN107910060A CN201711242792.3A CN201711242792A CN107910060A CN 107910060 A CN107910060 A CN 107910060A CN 201711242792 A CN201711242792 A CN 201711242792A CN 107910060 A CN107910060 A CN 107910060A
- Authority
- CN
- China
- Prior art keywords
- type
- illness
- description information
- initial
- feature selection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
An embodiment of the present application discloses a method and apparatus for generating information. One embodiment of the method includes: obtaining illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type; for each type of illness description information obtained, importing the illness description information of that type into a pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information; and importing the generated feature information into a pre-established feature fusion model to generate a probability value that the illness described by the illness description information belongs to a predefined illness, where the feature fusion model is used to characterize the correspondence between feature information and probability values. This embodiment enriches the ways in which information can be generated.
Description
Technical field
The present application relates to the field of computer technology, in particular to the field of Internet technology, and more particularly to a method and apparatus for generating information.
Background art
When people fall ill, they need to go to a hospital or clinic to seek the help of a doctor. At present, however, physical medical resources are limited, and it is difficult to meet people's demand for medical resources. With the development of computers and the Internet, various forms of online medical services can be used.
Summary of the invention
The embodiment of the present application proposes the method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, including: obtaining illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type; for each type of illness description information obtained, importing the illness description information of that type into a pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information; and importing the generated feature information into a pre-established feature fusion model to generate a probability value that the illness described by the illness description information belongs to a predefined illness, where the feature fusion model is used to characterize the correspondence between feature information and probability values.
In some embodiments, the image type includes at least two sub-image types, and the feature extraction model corresponding to the image type includes a sub-image feature extraction model corresponding to each of the at least two sub-image types, where a sub-image feature extraction model is used to characterize the correspondence between images and image feature information.
In some embodiments, for each type of illness description information obtained, importing the illness description information of that type into the pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type includes: in response to determining that the illness description information of the at least two types obtained includes illness description information of the image type, determining the sub-image feature extraction model corresponding to the sub-image type of the obtained image, where the obtained image is the illness description information of the image type among the illness description information of the at least two types obtained; and importing the obtained image into the determined sub-image feature extraction model to generate image feature information of the obtained image.
In some embodiments, the feature extraction model is a neural network model, and the neural network model includes a first fully connected layer; and for each type of illness description information obtained, importing the illness description information of that type into the pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type includes: for each type of illness description information obtained, importing the illness description information of that type into the pre-established feature extraction model corresponding to that type, and outputting feature information corresponding to the illness description information of that type from the first fully connected layer of the feature extraction model of that type.
In some embodiments, importing the generated feature information into the pre-established feature fusion model to generate the probability value that the illness described by the illness description information belongs to a predefined illness includes: concatenating the feature information output by the first fully connected layers of the at least two feature extraction models; and importing the concatenated feature information into the feature fusion model.
In some embodiments, the feature extraction models and the feature fusion model are obtained through the following training steps: obtaining at least two initial first neural networks and an initial second neural network, where the output of each initial first neural network serves as an input of the initial second neural network, the feature extraction models are obtained based on the initial first neural networks, and the feature fusion model is obtained based on the initial second neural network; obtaining a sample set, where a sample includes illness description samples of at least two types describing the same illness and an illness label indicating the illness described by the illness description samples; and training the at least two initial first neural networks and the initial second neural network using the sample set, taking the trained initial first neural networks as the feature extraction models and the trained initial second neural network as the feature fusion model.
In some embodiments, the feature fusion model includes a second fully connected layer, and the initial second neural network includes an initial second fully connected layer, where the initial second fully connected layer includes at least two nodes; and training the at least two initial first neural networks corresponding to the feature extraction models and the initial second neural network using the sample set includes: using a dropout method to set the weights of some of the at least two nodes included in the initial second fully connected layer to zero.
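The dropout step can be sketched as follows. This is a simplified illustration of zeroing a random subset of node weights during training; the drop probability of 0.5 and the seeded generator are assumptions for reproducibility, not values from the patent.

```python
import random

# Dropout as described: during training, the weights of a randomly chosen
# subset of the nodes in the initial second fully connected layer are set
# to zero. Drop probability 0.5 is an assumed value for this sketch.

def apply_dropout(node_weights, drop_prob=0.5, rng=None):
    rng = rng or random.Random(0)  # seeded for a reproducible illustration
    return [0.0 if rng.random() < drop_prob else w for w in node_weights]

weights = [0.4, -0.2, 0.9, 0.1, -0.5, 0.3]
dropped = apply_dropout(weights)
# Some weights are zeroed; the rest keep their original values.
```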
In a second aspect, an embodiment of the present application provides an apparatus for generating information, including: an obtaining unit configured to obtain illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type; a first generation unit configured to, for each type of illness description information obtained, import the illness description information of that type into a pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information; and a second generation unit configured to import the generated feature information into a pre-established feature fusion model to generate a probability value that the illness described by the illness description information belongs to a predefined illness, where the feature fusion model is used to characterize the correspondence between feature information and probability values.
In some embodiments, the image type includes at least two sub-image types, and the feature extraction model corresponding to the image type includes a sub-image feature extraction model corresponding to each of the at least two sub-image types, where a sub-image feature extraction model is used to characterize the correspondence between images and image feature information.
In some embodiments, the first generation unit is further configured to: in response to determining that the illness description information of the at least two types obtained includes illness description information of the image type, determine the sub-image feature extraction model corresponding to the sub-image type of the obtained image, where the obtained image is the illness description information of the image type among the illness description information of the at least two types obtained; and import the obtained image into the determined sub-image feature extraction model to generate image feature information of the obtained image.
In some embodiments, the feature extraction model is a neural network model, and the neural network model includes a first fully connected layer; and the first generation unit is further configured to: for each type of illness description information obtained, import the illness description information of that type into the pre-established feature extraction model corresponding to that type, and output feature information corresponding to the illness description information of that type from the first fully connected layer of the feature extraction model of that type.
In some embodiments, the second generation unit is further configured to: concatenate the feature information output by the first fully connected layers of the at least two feature extraction models; and import the concatenated feature information into the feature fusion model.
In some embodiments, the feature extraction models and the feature fusion model are obtained through the following training steps: obtaining at least two initial first neural networks and an initial second neural network, where the output of each initial first neural network serves as an input of the initial second neural network, the feature extraction models are obtained based on the initial first neural networks, and the feature fusion model is obtained based on the initial second neural network; obtaining a sample set, where a sample includes illness description samples of at least two types describing the same illness and an illness label indicating the illness described by the illness description samples; and training the at least two initial first neural networks and the initial second neural network using the sample set, taking the trained initial first neural networks as the feature extraction models and the trained initial second neural network as the feature fusion model.
In some embodiments, the feature fusion model includes a second fully connected layer, and the initial second neural network includes an initial second fully connected layer, where the initial second fully connected layer includes at least two nodes; and training the at least two initial first neural networks corresponding to the feature extraction models and the initial second neural network using the sample set includes: using a dropout method to set the weights of some of the at least two nodes included in the initial second fully connected layer to zero.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; and a storage device for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method of the first aspect.
The method and apparatus for generating information provided by the embodiments of the present application obtain illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type; for each type of illness description information obtained, import the illness description information of that type into a pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information; and import the generated feature information into a pre-established feature fusion model to generate a probability value that the illness described by the illness description information belongs to a predefined illness, thereby enriching the ways in which information can be generated.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of a method for generating information according to the present application;
Fig. 3A and Fig. 3B are schematic diagrams of an application scenario of the method for generating information according to the present application;
Fig. 4 is a flowchart of an implementation of the method for generating information according to the present application;
Fig. 5A and Fig. 5B are schematic diagrams of the dropout method according to the present application;
Fig. 6 is a structural schematic diagram of one embodiment of an apparatus for generating information according to the present application;
Fig. 7 is a structural schematic diagram of a computer system suitable for implementing the server of an embodiment of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 of an embodiment of the method for generating information or the apparatus for generating information to which the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and so on. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as online consultation applications, instant messaging tools, web browser applications, shopping applications, search applications, mail clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and so on.
The server 105 may be a server providing various services, for example, a background server supporting the online consultation applications on the terminal devices 101, 102, 103. The background server may analyze and otherwise process received data such as illness consultation requests, and feed the processing results (such as an illness name) back to the terminal devices.
It should be noted that the method for generating information provided by the embodiments of the present application is generally executed by the server 105; accordingly, the apparatus for generating information is generally disposed in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the present application is illustrated. The method for generating information includes the following steps:
Step 201: obtaining illness description information of at least two types describing the same illness.
In this embodiment, the electronic device on which the method for generating information runs (such as the server shown in Fig. 1) may obtain illness description information of at least two types describing the same illness locally or from other electronic devices.
In this embodiment, the electronic device may receive the illness description information directly from other electronic devices to obtain it; alternatively, it may store the illness description information locally after receiving it from other electronic devices and then obtain it locally.
In this embodiment, the illness may be any of various illnesses. As an example, the illness may be a skin-related illness, an organ-related illness, or a bone-related illness.
As an example, when a user feels that any part of the body is uncomfortable or abnormal, the user may send information about the abnormal condition of that part to the server. The abnormal condition of the part may be described using description information of at least two types.
In this embodiment, the types of illness description information may include but are not limited to a text type, a sound type, or an image type.
In this embodiment, the image type may include a still image type, such as a picture, and may also include a moving image type, such as a video.
As an example, a user who finds redness and swelling on the skin may take a photo of the skin as illness description information of the image type, and may also input the text "red and swollen skin" as illness description information of the text type. The user may then send the photo of the skin and the text "red and swollen skin" to the server through a terminal.
Step 202: for each type of illness description information obtained, importing the illness description information of that type into the pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type.
In this embodiment, for each type of illness description information obtained, the electronic device may import the illness description information of that type into the pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type.
In this embodiment, the feature extraction model is used to characterize the correspondence between illness description information and feature information.
Optionally, the feature extraction model may be a text feature extraction model corresponding to illness description information of the text type. The illness description information of the text type may be imported into the text feature extraction model to obtain text feature information.
Optionally, the feature extraction model may also be a speech feature extraction model corresponding to illness description information of the sound type. The illness description information of the sound type may be imported into the speech feature extraction model to obtain speech feature information.
As an example, the speech feature extraction model may be obtained by adding a speech recognition module in front of the text feature extraction model. The speech recognition module may convert the received illness description information of the sound type into text using speech recognition technology. The text is then imported into the text feature extraction model.
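The composition just described (speech recognition module followed by the text feature extraction model) can be sketched as follows. Both `recognize_speech` and `extract_text_features` are hypothetical placeholders standing in for real components, which are not specified in the source.

```python
# Sketch of composing a speech feature extraction model from a speech
# recognition module followed by a text feature extraction model.
# Both component functions are hypothetical placeholders.

def recognize_speech(audio):
    # Placeholder ASR module: here the "audio" already carries a transcript.
    return audio["transcript"]

def extract_text_features(text):
    # Placeholder text feature extraction model: character and word counts.
    return [float(len(text)), float(text.count(" ") + 1)]

def extract_speech_features(audio):
    # Speech model = speech recognition module + text feature extraction.
    return extract_text_features(recognize_speech(audio))

features = extract_speech_features({"transcript": "red swollen skin"})
# features == [16.0, 3.0]: character count and word count of the transcript.
```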
Optionally, the feature extraction model may be an image feature extraction model corresponding to illness description information of the image type. The illness description information of the image type may be imported into the image feature extraction model to obtain image feature information.
As an example, the image feature extraction model may be a picture feature extraction model.
As an example, the image feature extraction model may also be a video feature extraction model.
In some optional implementations of this embodiment, the feature extraction model may be a correspondence table. As an example, if the feature extraction model is a text feature extraction model, the correspondence table may store a large number of texts and the text features corresponding to those texts in advance. If a user inputs illness description information of the text type, the illness description information of the text type may be matched against the large number of pre-stored texts to obtain the text matching the illness description information of the text type. The text feature corresponding to that text is then taken as the text feature corresponding to the illness description information of the text type. The image feature extraction model and the speech feature extraction model may also be correspondence tables; their use is similar to that of the text feature extraction model and is not described in detail here.
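A correspondence-table text feature extraction model can be sketched with a plain dictionary. The table contents and the word-overlap matching rule are assumptions made for illustration; the source does not specify how matching is performed.

```python
# A correspondence-table feature extraction model: pre-stored texts map to
# pre-computed text features, and an input is matched against the stored
# texts. Table contents and matching rule are illustrative assumptions.

TEXT_FEATURE_TABLE = {
    "red and swollen skin": [1.0, 0.8, 0.0],
    "persistent dry cough": [0.0, 0.1, 0.9],
}

def match_stored_text(query):
    # Simplistic matching: the stored text sharing the most words.
    best, best_overlap = None, -1
    for stored in TEXT_FEATURE_TABLE:
        overlap = len(set(query.split()) & set(stored.split()))
        if overlap > best_overlap:
            best, best_overlap = stored, overlap
    return best

def lookup_text_features(query):
    return TEXT_FEATURE_TABLE[match_stored_text(query)]

features = lookup_text_features("swollen red skin")
# Matches "red and swollen skin", so its stored feature vector is returned.
```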
Step 203: importing the generated feature information into the pre-established feature fusion model to generate a probability value that the illness described by the illness description information belongs to a predefined illness.
In this embodiment, the electronic device may import the generated feature information into the pre-established feature fusion model to generate the probability value that the illness described by the illness description information belongs to a predefined illness.
In this embodiment, the feature fusion model is used to characterize the correspondence between feature information and probability values.
In this embodiment, there may be one predefined illness or multiple predefined illnesses.
In some optional implementations of this embodiment, the output of the feature fusion model may be the probability values of all the predefined illnesses.
In some optional implementations of this embodiment, the output of the feature fusion model may also be the probability values of only some of the predefined illnesses. The probability values of all predefined illnesses are generated first, and then those with the higher probability values are selected for output.
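The "partial output" variant above can be sketched as a simple filter-and-sort step over the full set of probability values. The illness names, probability values, and the 0.5 threshold are illustrative assumptions.

```python
# Sketch of the "partial output" variant: probability values are generated
# for all predefined illnesses first, then only those above a threshold
# are output, highest first. Names, values, threshold are example data.

def select_top(probabilities, threshold=0.5):
    kept = {name: p for name, p in probabilities.items() if p >= threshold}
    # Present the kept illnesses from most to least probable.
    return sorted(kept.items(), key=lambda item: item[1], reverse=True)

all_probs = {"mosquito bite": 0.95, "endocrine disorder": 0.60, "eczema": 0.20}
output = select_top(all_probs)
# output == [("mosquito bite", 0.95), ("endocrine disorder", 0.6)]
```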
As an example, a user may send a photo of the skin and the text "red and swollen skin" to the electronic device through a terminal. Using the method shown in this embodiment, the electronic device may output the information "mosquito bite, 95%" and "endocrine disorder, 60%" from the feature fusion model. The electronic device may also output only the information "mosquito bite, 95%" from the feature fusion model.
In some optional implementations of this embodiment, the feature fusion model may be a correspondence table in which a large number of feature information items and probability values are stored correspondingly. As an example, the correspondence table stores a large number of pairs of text feature information and image feature information, together with the illness names and probability values corresponding to those pairs. The feature information generated in step 202 (for example, including text feature information and image feature information) may be matched against the pairs of text feature information and image feature information in the table to obtain the pair matching the feature information generated in step 202, and the illness name and probability value corresponding to that pair are taken as the output of the feature fusion model.
In some optional implementations of this embodiment, the electronic device may send the illness labels and probability values output by the feature fusion model to the terminal so that the terminal displays them to the user.
With continued reference to Fig. 3A and Fig. 3B, Fig. 3A and Fig. 3B are schematic diagrams of an application scenario of the method for generating information according to this embodiment. In the application scenario of Fig. 3A, a user first initiates a consultation request 303 to the server 302 using the terminal 301, where the consultation request 303 includes illness description information of at least two types. Fig. 3B shows the processing procedure of the server. The illness description information of the at least two types may be, for example, the text "red and swollen skin" 3031 and a photographed skin picture 3032. The server 302 may obtain the illness description information of the at least two types in the background. Then, the server 302 imports the text "red and swollen skin" 3031 into the text feature extraction model 3041 to generate text feature information 3051, and imports the photographed skin picture into the image feature extraction model 3042 to generate image feature information 3052. Afterwards, the server imports the generated text feature information 3051 and image feature information 3052 into the feature fusion model 306 to generate predefined illness names and probability values 307, for example, "mosquito bite, 95%" and "endocrine disorder, 60%". Finally, the server may send the illness names and probability values 307 to the terminal, which displays them to the user.
The method provided by the above embodiment of the present application obtains illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type; for each type of illness description information obtained, imports the illness description information of that type into a pre-established feature extraction model corresponding to that type to generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information; and imports the generated feature information into a pre-established feature fusion model to generate a probability value that the illness described by the illness description information belongs to a predefined illness, thereby providing a richer way of generating information.
In some optional implementations of the present embodiment, above-mentioned image type can include at least two subgraph classes
Type, Feature Selection Model corresponding with image type can include every drawing of seeds picture class in above-mentioned at least two subgraphs type
The corresponding subgraph Feature Selection Model of type.
In this implementation, for image type, from different angles, different subgraph types can be divided into.
As an example, it is that dynamic or static state distinguish from image, subgraph type can be picture and video.
As an example, in the case of medical consultation, from the perspective of medical imaging, the sub-image types may be X-ray images, computed tomography (CT) images, positron emission tomography (PET) images, and so on.
In this implementation, the above sub-image feature extraction model is used to characterize the correspondence between images and image features.
In some optional implementations of the present embodiment, the above step 202 may include: in response to determining that the acquired illness description information of at least two types includes illness description information of the image type, determining the sub-image feature extraction model corresponding to the sub-image type of the acquired image, and importing the acquired image into the determined sub-image feature extraction model to generate image feature information of the acquired image.
In this implementation, the above acquired image is the illness description information of the image type among the acquired illness description information of at least two types.
As an example, the images of the image type acquired by the above electronic device may include both an image of the skin photographed by the user and an X-ray image.
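The dispatch described in this implementation can be sketched as a lookup from sub-image type to sub-image feature extraction model. The extractor functions and the `"skin_photo"`/`"x_ray"` type tags below are illustrative assumptions, not names from the patent.

```python
# Sketch: determine the sub-image feature extraction model corresponding to
# the sub-image type of an acquired image, then import the image into it.
# Both extractors are hypothetical stand-ins for trained sub-image models.

def skin_photo_extractor(image):
    return ["skin", len(image)]

def x_ray_extractor(image):
    return ["xray", len(image)]

SUB_IMAGE_EXTRACTORS = {
    "skin_photo": skin_photo_extractor,
    "x_ray": x_ray_extractor,
}

def extract_image_features(sub_image_type, image):
    extractor = SUB_IMAGE_EXTRACTORS[sub_image_type]
    return extractor(image)

features = extract_image_features("x_ray", [0.1, 0.5, 0.9])
```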
In some optional implementations of the present embodiment, the above feature extraction model may be a neural network model. Here, the neural network model includes a first fully connected layer located at its last layer. The last layer may be the output layer of the neural network model.
As an example, a neural network model is a model based on an artificial neural network. An artificial neural network is a network system that simulates a biological neural network and consists of a large number of nodes (also called processing units or neurons).
As an example, the neural network model may be any of various types of neural network models, including but not limited to convolutional neural network models, recurrent neural network models, and so on.
As an example, a fully connected layer may be connected to all nodes of the layer preceding it.
As an example, if the fully connected layer is a convolutional layer with multiple channels, i.e., it applies multiple 1*1 convolution kernels to all nodes of the preceding layer, then the output of the fully connected layer may be a one-dimensional vector whose number of elements equals the number of channels (the number of convolution kernels).
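The equivalence stated above can be checked numerically: applying k 1*1 convolution kernels across the n nodes of the previous layer computes the same linear map as an ordinary fully connected layer, and the output length equals the number of channels. The sizes below are illustrative.

```python
# Check: k 1*1 kernels over n previous-layer nodes == fully connected layer.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_kernels = 4, 3                       # previous-layer nodes; channels
prev_layer = rng.standard_normal(n_nodes)
kernels = rng.standard_normal((n_kernels, n_nodes))  # one weight per node per kernel

# Each 1*1 kernel (channel) computes a weighted sum over all previous-layer
# nodes, so stacking the channels yields the fully connected computation.
conv_out = np.array([k @ prev_layer for k in kernels])
fc_out = kernels @ prev_layer

assert conv_out.shape == (n_kernels,)   # output length = number of channels
```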
In some optional implementations of the present embodiment, the above step 202 may also include: for each acquired type of illness description information, importing the illness description information of that type into the pre-established feature extraction model corresponding to that type, and outputting, from the first fully connected layer of the feature extraction model of that type, the feature information corresponding to the illness description information of that type.
In some optional implementations of the present embodiment, the above step 203 may include: concatenating the feature information output by the first fully connected layers of at least two feature extraction models, and importing the concatenated feature information into the above feature fusion model.
As an example, if there are two feature extraction models, the feature information output by the first fully connected layers of the two models may be combined to complete the concatenation.
As an example, the feature extraction models may be a text feature extraction model and an image feature extraction model. The first fully connected layer of the text feature extraction model outputs a one-dimensional vector with 2 elements; the output of the image feature extraction model is also a one-dimensional vector, with 3 elements. The 3-element one-dimensional vector may be concatenated after the 2-element one-dimensional vector to obtain a 5-element one-dimensional vector, which is then imported into the above feature fusion model.
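The concatenation example above, written out; the concrete feature values are illustrative.

```python
# A 2-element text feature vector and a 3-element image feature vector are
# joined into a 5-element vector that becomes the fusion model's input.
import numpy as np

text_features = np.array([0.7, 0.1])        # first FC layer, text model
image_features = np.array([0.3, 0.9, 0.5])  # first FC layer, image model

fused_input = np.concatenate([text_features, image_features])
```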
In some optional implementations of the present embodiment, as shown in Fig. 4, the above feature extraction model and the above feature fusion model may be trained through the steps of flow 400:
Step 401: obtain at least two initial first neural networks and an initial second neural network.
It should be noted that the executing body of flow 400 may be the same as or different from the executing body of flow 200.
In this implementation, the initial neural networks may be various neural networks, for example, convolutional neural networks, recurrent neural networks, long short-term memory (LSTM) networks, and so on.
As an example, an initial neural network may refer to a neural network that is untrained or whose training has not been completed. Each layer of an initial neural network may be provided with initial parameters, and the parameters may be continually adjusted during the training process.
As an example, an initial neural network may be any of various types of untrained or incompletely trained artificial neural networks, or a model obtained by combining multiple untrained or incompletely trained artificial neural networks. For example, an initial neural network may be an untrained convolutional neural network or an untrained recurrent neural network, or it may be a model obtained by combining an untrained convolutional neural network, an untrained recurrent neural network, and an untrained fully connected layer.
As an example, the initial first neural networks may include a text initial first neural network, a voice initial first neural network, and an image initial first neural network. The text initial first neural network is used to extract text features and yields the text feature extraction model after training. The voice initial first neural network is used to extract voice features and yields the voice feature extraction model after training. The image initial first neural network is used to extract image features and yields the image feature extraction model after training.
In this implementation, the output of each initial first neural network is the input of the above initial second neural network.
In this implementation, the feature extraction models are obtained based on the initial first neural networks, and the feature fusion model is obtained based on the initial second neural network.
As an example, the text initial first neural network may be an untrained neural network, or a neural network pre-trained with text samples before step 401. The image initial first neural network may be an untrained neural network, or a neural network pre-trained with image samples before step 401. The voice initial first neural network may be an untrained neural network, or a neural network pre-trained with voice samples before step 401.
Step 402: obtain a sample set.
In this implementation, each sample includes illness description samples of at least two types describing the same illness, and an illness label of the illness described by the illness description samples.
As an example, a sample may include an illness description sample of the text type (the text "red and swollen skin"), an illness description sample of the image type (a photographed skin picture), and an illness label (mosquito bite).
Step 403: using the sample set, train the at least two initial first neural networks and the initial second neural network, taking the trained initial first neural networks as the feature extraction networks and the trained initial second neural network as the feature fusion network.
In this implementation, the above feature fusion model includes a second fully connected layer, and the above initial second neural network includes an initial second fully connected layer. Here, the above initial second fully connected layer may include at least two nodes.
As an example, the principle of the training process is as follows: for the illness description samples of at least two types in a sample, select the initial first neural network of the corresponding type for each, and input each sample into its corresponding initial first neural network. The output of each initial first neural network serves as the input of the above initial second neural network, which produces the output of the initial second neural network. Using a loss function, compare the output of the initial second neural network with the illness label of the sample to obtain the divergence between the two. Then adjust the initial first neural networks and the initial second neural network by means of backpropagation. Here, the form of the loss function is not limited.
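The joint training principle described above can be sketched numerically with toy components: two "initial first networks" reduced to single linear layers (a text branch and an image branch), whose concatenated outputs feed an "initial second network" reduced to a linear fusion layer. A squared-error loss stands in for the unspecified loss function, and backpropagation adjusts all three weight sets together. Everything here is an illustrative assumption, not the patent's actual architecture.

```python
# Toy joint training: branch outputs are concatenated and fed to a fusion
# layer; backpropagation updates branches and fusion layer together.
import numpy as np

rng = np.random.default_rng(1)
x_text = rng.standard_normal(3)          # stand-in text description sample
x_img = rng.standard_normal(4)           # stand-in image description sample
y = 1.0                                  # stand-in illness label
W1 = rng.standard_normal((2, 3)) * 0.1   # text branch (initial first network)
W2 = rng.standard_normal((3, 4)) * 0.1   # image branch (initial first network)
w3 = rng.standard_normal(5) * 0.1        # fusion layer (initial second network)
lr = 0.01

def forward(W1, W2, w3):
    h = np.concatenate([W1 @ x_text, W2 @ x_img])  # branch outputs feed fusion
    return h, w3 @ h

h, y_hat = forward(W1, W2, w3)
loss_before = (y_hat - y) ** 2

for _ in range(100):
    h, y_hat = forward(W1, W2, w3)
    g = 2.0 * (y_hat - y)                # d(loss)/d(y_hat)
    dh = g * w3                          # gradient through the fusion layer
    w3 -= lr * g * h
    W1 -= lr * np.outer(dh[:2], x_text)  # backpropagate into each branch
    W2 -= lr * np.outer(dh[2:], x_img)

_, y_hat = forward(W1, W2, w3)
loss_after = (y_hat - y) ** 2
```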
In this implementation, step 403 may include: using the dropout method, setting to zero the weights of some of the at least two nodes included in the above initial second fully connected layer.
As an example, Figs. 5A and 5B together illustrate the principle of the dropout method. Fig. 5A is a schematic diagram of the connections of a fully connected layer before the dropout method is applied; Fig. 5B is a schematic diagram of the connections of the fully connected layer after the dropout method is applied. In Fig. 5A, counting from the top, the second and third layers are fully connected layers; under normal circumstances, each node of a fully connected layer is connected to all nodes of the preceding layer. In Fig. 5B, for the second and third layers from the top, the dropout method selects nodes of the fully connected layer to drop with a certain probability (the crossed-out nodes in the figure are dropped), and each node of the fully connected layer has the same probability of being dropped. If a node is dropped, its weights are set to zero during the training process, which is equivalent to hiding the node: it is connected neither to the nodes of the preceding layer nor to the nodes of the following layer.
It should be noted that using the dropout method can prevent the feature fusion model from overfitting.
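The dropout step described above can be sketched as follows: every node of the fully connected layer is dropped with the same probability p, and a dropped node has all of its weights set to zero for the training step. The layer sizes and probability are illustrative.

```python
# Sketch of dropout on a fully connected layer: drop each node with the
# same probability p by zeroing all of that node's incoming weights.
import numpy as np

rng = np.random.default_rng(42)
p = 0.5                                  # identical drop probability per node
weights = rng.standard_normal((6, 4))    # 6 nodes, 4 incoming weights each

keep_mask = rng.random(6) >= p           # True for nodes that are kept
dropped_weights = weights * keep_mask[:, None]  # zero weights of dropped nodes
```

During training only the kept nodes participate; a dropped node is effectively disconnected from both the previous and the next layer for that step.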
With further reference to Fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 6, the apparatus 600 for generating information of the present embodiment includes: an acquiring unit 601, a first generation unit 602, and a second generation unit 603. The acquiring unit is configured to acquire illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type. The first generation unit is configured to, for each acquired type of illness description information, import the illness description information of that type into a pre-established feature extraction model corresponding to that type, and generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information. The second generation unit is configured to import the generated feature information into a pre-established feature fusion model, and generate a probability value that the illness described by the above illness description information belongs to a predefined illness, where the above feature fusion model is used to characterize the correspondence between feature information and probability values.
In the present embodiment, for the specific processing of the acquiring unit 601, the first generation unit 602, and the second generation unit 603, and the technical effects thereof, reference may be made to the related descriptions of step 201, step 202, and step 203 in the embodiment corresponding to Fig. 2, respectively; details are not repeated here.
In some optional implementations of the present embodiment, the above image type includes at least two sub-image types, and the feature extraction model corresponding to the image type includes a sub-image feature extraction model corresponding to each of the at least two sub-image types, where the sub-image feature extraction model is used to characterize the correspondence between images and image feature information.
In some optional implementations of the present embodiment, the above first generation unit is further configured to: in response to determining that the acquired illness description information of at least two types includes illness description information of the image type, determine the sub-image feature extraction model corresponding to the sub-image type of the acquired image, where the acquired image is the illness description information of the image type among the acquired illness description information of at least two types; and import the acquired image into the determined sub-image feature extraction model to generate image feature information of the acquired image.
In some optional implementations of the present embodiment, the feature extraction model is a neural network model, and the above neural network model includes a first fully connected layer; and the above first generation unit is further configured to: for each acquired type of illness description information, import the illness description information of that type into the pre-established feature extraction model corresponding to that type, and output, from the first fully connected layer of the feature extraction model of that type, the feature information corresponding to the illness description information of that type.
In some optional implementations of the present embodiment, the above second generation unit is further configured to: concatenate the feature information output by the first fully connected layers of at least two feature extraction models, and import the concatenated feature information into the above feature fusion model.
In some optional implementations of the present embodiment, the feature extraction model and the above feature fusion model are trained through the following steps: obtaining at least two initial first neural networks and an initial second neural network, where the output of each initial first neural network is the input of the above initial second neural network, the feature extraction model is obtained based on an initial first neural network, and the feature fusion model is obtained based on the above initial second neural network; obtaining a sample set, where each sample includes illness description samples of at least two types describing the same illness and an illness label of the illness described by the illness description samples; and using the above sample set, training the above at least two initial first neural networks and the above initial second neural network, taking the trained initial first neural networks as the feature extraction models and the trained initial second neural network as the feature fusion model.
In some optional implementations of the present embodiment, the above feature fusion model includes a second fully connected layer, and the above initial second neural network includes an initial second fully connected layer, where the above initial second fully connected layer includes at least two nodes; and the above training, using the above sample set, of the above at least two initial first neural networks corresponding to the above feature extraction model and the above initial second neural network includes: using the dropout method, setting to zero the weights of some of the at least two nodes included in the above initial second fully connected layer.
It should be noted that, for the implementation details and technical effects of each unit in the apparatus for generating information provided in the present embodiment, reference may be made to the descriptions of the other embodiments in the present application; details are not repeated here.
Referring now to Fig. 7, it shows a schematic structural diagram of a computer system 700 suitable for implementing the server of the embodiments of the present application. The server shown in Fig. 7 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in Fig. 7, the computer system 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the system 700. The CPU 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output portion 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 708 including a hard disk and the like; and a communication portion 709 including a network interface card such as a LAN card or a modem. The communication portion 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage portion 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 709, and/or installed from the removable medium 711. When the computer program is executed by the central processing unit (CPU) 701, the above functions defined in the method of the present application are performed.
It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquiring unit, a first generation unit, and a second generation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring illness description information of at least two types describing the same illness".
As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist independently without being incorporated into the apparatus. The above computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire illness description information of at least two types describing the same illness, where the types of illness description information include a text type, a sound type, or an image type; for each acquired type of illness description information, import the illness description information of that type into a pre-established feature extraction model corresponding to that type, and generate feature information corresponding to the illness description information of that type, where the feature extraction model is used to characterize the correspondence between illness description information and feature information; and import the generated feature information into a pre-established feature fusion model, and generate a probability value that the illness described by the above illness description information belongs to a predefined illness, where the above feature fusion model is used to characterize the correspondence between feature information and probability values.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (16)
1. A method for generating information, comprising:
acquiring illness description information of at least two types describing the same illness, wherein the types of illness description information include a text type, a sound type, or an image type;
for each acquired type of illness description information, importing the illness description information of that type into a pre-established feature extraction model corresponding to that type, and generating feature information corresponding to the illness description information of that type, wherein the feature extraction model is used to characterize the correspondence between illness description information and feature information; and
importing the generated feature information into a pre-established feature fusion model, and generating a probability value that the illness described by the illness description information belongs to a predefined illness, wherein the feature fusion model is used to characterize the correspondence between feature information and probability values.
2. The method according to claim 1, wherein the image type includes at least two sub-image types, and the feature extraction model corresponding to the image type includes a sub-image feature extraction model corresponding to each of the at least two sub-image types, wherein the sub-image feature extraction model is used to characterize the correspondence between images and image feature information.
3. The method according to claim 2, wherein said importing, for each acquired type of illness description information, the illness description information of that type into the pre-established feature extraction model corresponding to that type and generating feature information corresponding to the illness description information of that type comprises:
in response to determining that the acquired illness description information of at least two types includes illness description information of the image type, determining the sub-image feature extraction model corresponding to the sub-image type of the acquired image, wherein the acquired image is the illness description information of the image type among the acquired illness description information of at least two types; and
importing the acquired image into the determined sub-image feature extraction model, and generating image feature information of the acquired image.
4. The method according to claim 1, wherein the feature extraction model is a neural network model, and the neural network model includes a first fully connected layer; and
said importing, for each acquired type of illness description information, the illness description information of that type into the pre-established feature extraction model corresponding to that type and generating feature information corresponding to the illness description information of that type comprises:
for each acquired type of illness description information, importing the illness description information of that type into the pre-established feature extraction model corresponding to that type, and outputting, from the first fully connected layer of the feature extraction model of that type, the feature information corresponding to the illness description information of that type.
5. The method according to claim 4, wherein said importing the generated feature information into the pre-established feature fusion model and generating the probability value that the illness described by the illness description information belongs to a predefined illness comprises:
concatenating the feature information output by the first fully connected layers of at least two feature extraction models; and
importing the concatenated feature information into the feature fusion model.
6. The method according to any one of claims 1-5, wherein the feature extraction model and the feature fusion model are trained through the following steps:
obtaining at least two initial first neural networks and an initial second neural network, wherein the output of each initial first neural network is the input of the initial second neural network, wherein the feature extraction model is obtained based on an initial first neural network, and the feature fusion model is obtained based on the initial second neural network;
obtaining a sample set, wherein each sample includes illness description samples of at least two types describing the same illness and an illness label of the illness described by the illness description samples; and
using the sample set, training the at least two initial first neural networks and the initial second neural network, taking the trained initial first neural networks as feature extraction models and the trained initial second neural network as the feature fusion model.
7. The method according to claim 6, wherein the feature fusion model includes a second fully connected layer, and the initial second neural network includes an initial second fully connected layer, wherein the initial second fully connected layer includes at least two nodes; and
said training, using the sample set, the at least two initial first neural networks corresponding to the feature extraction model and the initial second neural network comprises:
using a dropout method, setting to zero the weights of some of the at least two nodes included in the initial second fully connected layer.
8. An apparatus for generating information, comprising:
an acquiring unit, configured to acquire illness description information of at least two types describing the same illness, wherein the types of illness description information include a text type, a sound type, or an image type;
a first generation unit, configured to, for each acquired type of illness description information, import the illness description information of that type into a pre-established feature extraction model corresponding to that type, and generate feature information corresponding to the illness description information of that type, wherein the feature extraction model is used to characterize the correspondence between illness description information and feature information; and
a second generation unit, configured to import the generated feature information into a pre-established feature fusion model, and generate a probability value that the illness described by the illness description information belongs to a predefined illness, wherein the feature fusion model is used to characterize the correspondence between feature information and probability values.
9. The apparatus according to claim 8, wherein the image type includes at least two sub-image types, and the feature extraction model corresponding to the image type includes a sub-image feature extraction model corresponding to each of the at least two sub-image types, wherein the sub-image feature extraction model is used to characterize the correspondence between images and image feature information.
10. The apparatus according to claim 9, wherein the first generation unit is further configured to:
in response to determining that the acquired illness description information of the at least two types includes illness description information of the image type, determine the sub-image feature selection model corresponding to the sub-image type of the acquired image, wherein the acquired image is the illness description information of the image type among the acquired illness description information of the at least two types; and
import the acquired image into the determined sub-image feature selection model to generate image feature information of the acquired image.
11. The apparatus according to claim 8, wherein the feature selection model is a neural network model, and the neural network model includes a first fully connected layer; and
the first generation unit is further configured to: for each type of the acquired illness description information, import the illness description information of the type into the pre-established feature selection model corresponding to the type, and output the feature information corresponding to the illness description information of the type from the first fully connected layer of the feature selection model of the type.
12. The apparatus according to claim 11, wherein the second generation unit is further configured to:
concatenate the feature information output by the first fully connected layers of the at least two feature selection models; and
import the concatenated feature information into the feature fusion model.
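The splicing step of claim 12 is a plain vector concatenation of the per-type fully-connected-layer outputs. A sketch with assumed (illustrative) layer widths:

```python
import numpy as np

# Assumed first-fully-connected-layer outputs of three per-type
# feature selection models; the widths 128/64/256 are illustrative.
text_fc = np.zeros(128)
audio_fc = np.ones(64)
image_fc = np.full(256, 2.0)

# Splice the feature vectors into a single input for the fusion model;
# the result has width 128 + 64 + 256 = 448.
fused_input = np.concatenate([text_fc, audio_fc, image_fc])
```

The fusion model's input layer must be sized to the summed widths of the branch outputs, which is why the per-type models and the fusion model are trained together in claim 13.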
13. The apparatus according to any one of claims 8-12, wherein the feature selection models and the feature fusion model are obtained by training through the following steps:
acquiring at least two initial first neural networks and an initial second neural network, wherein the output of each initial first neural network is an input of the initial second neural network, the feature selection models are obtained based on the initial first neural networks, and the feature fusion model is obtained based on the initial second neural network;
acquiring a sample set, a sample including illness description samples of at least two types describing a same illness and an illness label of the illness described by the illness description samples; and
training the at least two initial first neural networks and the initial second neural network using the sample set, taking the trained initial first neural networks as the feature selection models and taking the trained initial second neural network as the feature fusion model.
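Claim 13 trains the branch networks and the fusion network jointly, end to end, on labelled multi-type samples. The sketch below reduces every network to a single linear layer and uses one toy sample with a binary illness label; all shapes, the learning rate, and the log-loss objective are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two initial "first" networks (one per description type) and one
# initial "second" (fusion) network, each reduced to a linear layer.
W_text = rng.normal(size=(4, 8)) * 0.1
W_audio = rng.normal(size=(4, 6)) * 0.1
W_fuse = rng.normal(size=(1, 8)) * 0.1

def forward(x_text, x_audio):
    # Branch outputs are concatenated and fed to the fusion network.
    h = np.concatenate([W_text @ x_text, W_audio @ x_audio])
    logit = (W_fuse @ h).item()
    return h, 1.0 / (1.0 + np.exp(-logit))  # probability of the illness label

# One toy sample: paired text/audio features with illness label y = 1.
x_text, x_audio, y = rng.normal(size=8), rng.normal(size=6), 1.0

lr = 0.5
for _ in range(500):
    h, p = forward(x_text, x_audio)
    g = p - y                     # d(log loss)/d(logit)
    gh = g * W_fuse.ravel()       # gradient flowing back into both branches
    W_fuse -= lr * g * h.reshape(1, -1)
    W_text -= lr * np.outer(gh[:4], x_text)
    W_audio -= lr * np.outer(gh[4:], x_audio)

_, p = forward(x_text, x_audio)  # probability after joint training
```

Because the loss gradient passes through the fusion layer into every branch, the branch weights and fusion weights are updated together, exactly the "trained jointly, then split into feature selection models and a feature fusion model" scheme of the claim.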
14. The apparatus according to claim 13, wherein the feature fusion model includes a second fully connected layer, and the initial second neural network includes an initial second fully connected layer, wherein the initial second fully connected layer includes at least two nodes; and
the training of the at least two initial first neural networks corresponding to the feature selection models and of the initial second neural network using the sample set includes:
setting to zero, by dropout, the weights of a portion of the at least two nodes included in the initial second fully connected layer.
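Claim 14 zeroes a random subset of the second fully connected layer's nodes during training, i.e. dropout. A common realization is inverted dropout, which zeroes node activations and rescales the survivors so no change is needed at inference time; this sketch assumes that variant and a drop probability of 0.5:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    # During training, zero a random subset of nodes and rescale the
    # survivors by 1/(1 - p_drop); at inference, pass through unchanged.
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(42)
layer_out = np.ones(10)           # activations of the second FC layer
dropped = dropout(layer_out, p_drop=0.5, rng=rng)
```

Each surviving node carries the value 2.0 (1.0 rescaled by 1/0.5) and each dropped node is exactly zero; which nodes are dropped is resampled every training step.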
15. A server, including:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
16. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711242792.3A CN107910060A (en) | 2017-11-30 | 2017-11-30 | Method and apparatus for generating information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107910060A true CN107910060A (en) | 2018-04-13 |
Family
ID=61849373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711242792.3A Pending CN107910060A (en) | 2017-11-30 | 2017-11-30 | Method and apparatus for generating information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107910060A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107145746A (en) * | 2017-05-09 | 2017-09-08 | 北京大数医达科技有限公司 | The intelligent analysis method and system of a kind of state of an illness description |
CN107247881A (en) * | 2017-06-20 | 2017-10-13 | 北京大数医达科技有限公司 | A kind of multi-modal intelligent analysis method and system |
CN107346369A (en) * | 2017-05-11 | 2017-11-14 | 北京紫宸正阳科技有限公司 | A kind of medical information processing method and device |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110504026B (en) * | 2018-05-18 | 2022-07-26 | 宏达国际电子股份有限公司 | Control method and medical system |
US11600387B2 (en) | 2018-05-18 | 2023-03-07 | Htc Corporation | Control method and reinforcement learning for medical system |
CN110504026A (en) * | 2018-05-18 | 2019-11-26 | 宏达国际电子股份有限公司 | Control method and medical system |
CN108664948A (en) * | 2018-05-21 | 2018-10-16 | 北京京东尚科信息技术有限公司 | Method and apparatus for generating information |
CN108664948B (en) * | 2018-05-21 | 2022-12-27 | 北京京东尚科信息技术有限公司 | Method and apparatus for generating information |
CN110838363B (en) * | 2018-08-16 | 2023-02-21 | 宏达国际电子股份有限公司 | Control method and medical system |
CN110838363A (en) * | 2018-08-16 | 2020-02-25 | 宏达国际电子股份有限公司 | Control method and medical system |
CN110033019B (en) * | 2019-03-06 | 2021-07-27 | 腾讯科技(深圳)有限公司 | Method and device for detecting abnormality of human body part and storage medium |
CN110033019A (en) * | 2019-03-06 | 2019-07-19 | 腾讯科技(深圳)有限公司 | Method for detecting abnormality, device and the storage medium of human body |
CN110059201A (en) * | 2019-04-19 | 2019-07-26 | 杭州联汇科技股份有限公司 | A kind of across media program feature extracting method based on deep learning |
CN112289441A (en) * | 2020-11-19 | 2021-01-29 | 吾征智能技术(北京)有限公司 | Multimode-based medical biological characteristic information matching system |
CN112289441B (en) * | 2020-11-19 | 2024-03-22 | 吾征智能技术(北京)有限公司 | Medical biological feature information matching system based on multiple modes |
CN112731558A (en) * | 2020-12-16 | 2021-04-30 | 中国科学技术大学 | Joint inversion method and device for seismic surface wave and receiving function |
CN112885334A (en) * | 2021-01-18 | 2021-06-01 | 吾征智能技术(北京)有限公司 | Disease recognition system, device, storage medium based on multi-modal features |
CN113763532A (en) * | 2021-04-19 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Human-computer interaction method, device, equipment and medium based on three-dimensional virtual object |
CN113763532B (en) * | 2021-04-19 | 2024-01-19 | 腾讯科技(深圳)有限公司 | Man-machine interaction method, device, equipment and medium based on three-dimensional virtual object |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107910060A (en) | Method and apparatus for generating information | |
CN108038469B (en) | Method and apparatus for detecting human body | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN107578017A (en) | Method and apparatus for generating image | |
US20200410732A1 (en) | Method and apparatus for generating information | |
CN107633218A (en) | Method and apparatus for generating image | |
CN107644209A (en) | Method for detecting human face and device | |
CN107728780A (en) | A kind of man-machine interaction method and device based on virtual robot | |
CN108427939A (en) | model generating method and device | |
CN108022586A (en) | Method and apparatus for controlling the page | |
CN108595628A (en) | Method and apparatus for pushed information | |
CN108830235A (en) | Method and apparatus for generating information | |
CN109086719A (en) | Method and apparatus for output data | |
CN108985257A (en) | Method and apparatus for generating information | |
CN111275784B (en) | Method and device for generating image | |
US20190087683A1 (en) | Method and apparatus for outputting information | |
CN107609506A (en) | Method and apparatus for generating image | |
CN109410253B (en) | For generating method, apparatus, electronic equipment and the computer-readable medium of information | |
CN109829432A (en) | Method and apparatus for generating information | |
CN108363999A (en) | Operation based on recognition of face executes method and apparatus | |
CN107729928A (en) | Information acquisition method and device | |
CN109241934A (en) | Method and apparatus for generating information | |
CN108511066A (en) | information generating method and device | |
CN108960110A (en) | Method and apparatus for generating information | |
CN109887077A (en) | Method and apparatus for generating threedimensional model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180413 |