CN113610127A - Genetic crossing algorithm-based image feature fusion method and system

Info

Publication number: CN113610127A
Application number: CN202110841757.3A
Authority: CN (China)
Prior art keywords: target, training level, description, image, data
Legal status: Pending (assumed status; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 费晓霞
Current Assignee: Shanghai DC Science Co Ltd
Original Assignee: Shanghai DC Science Co Ltd
Application filed by Shanghai DC Science Co Ltd
Priority to CN202110841757.3A
Publication of CN113610127A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/12 - Computing arrangements based on biological models using genetic models
    • G06N3/126 - Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

According to the image feature fusion method and system based on the genetic crossover algorithm, the description data of the standard feature training level is updated according to the processing feature training level that corresponds to a nearby candidate scene interaction image, where that candidate image is configured with the target object description content. The description data of the target feature training level is then selected, on the basis of the update result, from the description data of the feature training level and the description data of the standard feature training level, and the target scene interaction image is processed through the target feature training level corresponding to the selected description data to obtain the fusion result. In this way, each time a scene interaction image is acquired, the target feature training level used to process it is looked up at the target data processing terminal. This avoids assigning the wrong feature training level to a scene interaction image when the training texts of multiple images fall out of order, and thereby safeguards the precision of the image fusion processing.

Description

Genetic crossing algorithm-based image feature fusion method and system
Technical Field
The application relates to the technical field of image fusion, and in particular to an image feature fusion method and system based on a genetic crossover algorithm.
Background
In recent years, image-capturing equipment has been continuously updated and people's incomes have steadily risen, so people use such equipment to photograph the important memories and information of their lives for safekeeping. The captured images therefore need to be processed so that important features are retained and identical features are fused, which conserves storage resources effectively. However, the existing data fusion process still has shortcomings.
Disclosure of Invention
In view of this, the present application provides an image feature fusion method and system based on a genetic crossover algorithm.
In a first aspect, an image feature fusion method based on a genetic crossover algorithm is provided, the method comprising:
acquiring a target scene interaction image, where the target scene interaction image is configured with target object description content;
obtaining the description data of the feature training level corresponding to the target object description content, and obtaining the description data of the standard feature training level corresponding to the target object description content from a target data processing terminal;
where the description data of the standard feature training level is updated according to the processing feature training level corresponding to a nearby candidate scene interaction image, and the nearby candidate scene interaction image is configured with the target object description content;
selecting the description data of the target feature training level, on the basis of the update result, from the description data of the feature training level and the description data of the standard feature training level;
and fusing the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result.
Further, the obtaining of the description data of the feature training level corresponding to the target object description content includes:
obtaining a description data training text of the feature training level, where the description data training text includes object description content training information corresponding to each of a plurality of feature training level states;
determining the feature training level state corresponding to the target object description content based on the correlation results between the target object description content and each piece of object description content training information;
and obtaining the description data of the feature training level based on the current template of the description data training text and the determined feature training level state.
Further, the obtaining of the description data of the standard feature training level corresponding to the target object description content from the target data processing terminal includes:
performing projection processing on the target object description content to obtain target object projection description content;
determining the target data processing terminal from the candidate data processing terminals based on the correlation results between the target object projection description content and the object projection description content vectors corresponding to the candidate data processing terminals;
and determining the description data of the standard feature training level from the candidate feature training level description data corresponding to each candidate object description content in the target data processing terminal, based on the correlation results between the target object description content and each candidate object description content in that terminal.
Further, the update result includes a state template, and the selecting of the description data of the target feature training level based on the update result in the description data of the feature training level and the description data of the standard feature training level includes:
selecting, from the description data of the feature training level and the description data of the standard feature training level, the description data corresponding to the optimal state template as the description data of the target feature training level;
wherein the method further comprises:
when the state template of the description data of the feature training level is updated, generating a description data update interaction image of the feature training level according to the target object description content and the description data of the feature training level;
and sending the description data update interaction image to the target data processing terminal, so that the target data processing terminal replaces the description data of the standard feature training level with the description data of the feature training level according to the update interaction image.
Further, the fusing of the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result includes:
looking up the interaction image state of each candidate scene interaction image corresponding to the target object description content;
when the interaction image state of at least one candidate scene interaction image has not reached the standard, fusing the target scene interaction image through the processing feature training level to obtain the fusion result;
and when the interaction image states of all candidate scene interaction images have reached the standard, fusing the target scene interaction image through the target feature training level to obtain the fusion result.
When the target feature training level is a standard feature training level, the fusing of the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result includes:
acquiring a standard image feature extraction method from the standard feature training level;
and processing the target scene interaction image based on the standard image feature extraction method to obtain the fusion result.
The target scene interaction image is configured with target image description content. When the target feature training level is a dedicated feature training level, the dedicated feature training level stores a candidate image feature extraction method for each of a plurality of candidate image description contents, and the processing of the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result includes:
acquiring the target image feature extraction method corresponding to the target image description content from the dedicated feature training level;
and processing the target scene interaction image based on the target image feature extraction method to obtain the fusion result.
In a second aspect, an image feature fusion system based on a genetic crossover algorithm is provided, including a data acquisition end and a data processing terminal, where the data acquisition end is in communication connection with the data processing terminal, and the data processing terminal is specifically configured to:
acquire a target scene interaction image, where the target scene interaction image is configured with target object description content;
obtain the description data of the feature training level corresponding to the target object description content, and obtain the description data of the standard feature training level corresponding to the target object description content from a target data processing terminal;
where the description data of the standard feature training level is updated according to the processing feature training level corresponding to a nearby candidate scene interaction image, and the nearby candidate scene interaction image is configured with the target object description content;
select the description data of the target feature training level, on the basis of the update result, from the description data of the feature training level and the description data of the standard feature training level;
and fuse the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result.
Further, the data processing terminal is specifically configured to:
obtain a description data training text of the feature training level, where the description data training text includes object description content training information corresponding to each of a plurality of feature training level states;
determine the feature training level state corresponding to the target object description content based on the correlation results between the target object description content and each piece of object description content training information;
and obtain the description data of the feature training level based on the current template of the description data training text and the determined feature training level state.
Further, the data processing terminal is specifically configured to:
perform projection processing on the target object description content to obtain target object projection description content;
determine the target data processing terminal from the candidate data processing terminals based on the correlation results between the target object projection description content and the object projection description content vectors corresponding to the candidate data processing terminals;
and determine the description data of the standard feature training level from the candidate feature training level description data corresponding to each candidate object description content in the target data processing terminal, based on the correlation results between the target object description content and each candidate object description content in that terminal.
Further, the data processing terminal is specifically configured to:
select, from the description data of the feature training level and the description data of the standard feature training level, the description data corresponding to the optimal state template as the description data of the target feature training level;
wherein the data processing terminal is further configured to:
when the state template of the description data of the feature training level is updated, generate a description data update interaction image of the feature training level according to the target object description content and the description data of the feature training level;
and send the description data update interaction image to the target data processing terminal, so that the target data processing terminal replaces the description data of the standard feature training level with the description data of the feature training level according to the update interaction image.
Further, the data processing terminal is specifically configured to:
look up the interaction image state of each candidate scene interaction image corresponding to the target object description content;
when the interaction image state of at least one candidate scene interaction image has not reached the standard, fuse the target scene interaction image through the processing feature training level to obtain the fusion result;
and when the interaction image states of all candidate scene interaction images have reached the standard, fuse the target scene interaction image through the target feature training level to obtain the fusion result;
wherein the data processing terminal is specifically configured to:
acquire a standard image feature extraction method from the standard feature training level;
and process the target scene interaction image based on the standard image feature extraction method to obtain the fusion result;
wherein the data processing terminal is specifically configured to:
acquire the target image feature extraction method corresponding to the target image description content from a dedicated feature training level;
and process the target scene interaction image based on the target image feature extraction method to obtain the fusion result.
According to the image feature fusion method and system based on the genetic crossover algorithm, a target scene interaction image is acquired, and the target scene interaction image is configured with target object description content. The description data of the feature training level corresponding to the target object description content is obtained, and the description data of the standard feature training level corresponding to that content is obtained from a target data processing terminal; the description data of the standard feature training level is updated according to the processing feature training level corresponding to a nearby candidate scene interaction image, which is likewise configured with the target object description content. The description data of the target feature training level is then selected, on the basis of the update result, from the description data of the feature training level and the description data of the standard feature training level, and the target scene interaction image is processed through the corresponding target feature training level to obtain the fusion result. In this way, each time a scene interaction image is acquired, the target feature training level used to process it is looked up at the target data processing terminal. Because the description data of the target feature training level is determined by comparing the description data of the feature training level with that of the standard feature training level, once some object suddenly initiates a scene interaction image and the description data of the feature training level changes, all subsequent scene interaction images uniformly take the changed, optimal description data as the reference. This avoids assigning the wrong feature training level to a scene interaction image when the training texts of multiple images fall out of order, and thereby safeguards the precision of the image fusion processing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart of an image feature fusion method based on a genetic crossover algorithm according to an embodiment of the present application.
Fig. 2 is a block diagram of an image feature fusion apparatus based on a genetic crossover algorithm according to an embodiment of the present application.
Fig. 3 is an architecture diagram of an image feature fusion system based on a genetic crossover algorithm according to an embodiment of the present application.
Detailed Description
In order to better understand the above technical solutions, they are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present application are detailed illustrations of the technical solutions rather than limitations of them, and that, in the absence of conflict, the technical features in the embodiments and examples may be combined with each other.
Referring to fig. 1, an image feature fusion method based on a genetic crossover algorithm is shown, which may include the technical solutions described in the following steps 100 to 600.
Step 100, acquiring a target scene interaction image.
Illustratively, the target scene interaction image characterizes the image on which the genetic crossover algorithm computes.
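The disclosure names the genetic crossover algorithm but does not spell out the crossover computation itself. The following is a minimal sketch, under the assumption that crossover recombines two flattened image feature vectors at a single random cut point; all identifiers (single_point_crossover, the vector length 8) are illustrative and not part of the disclosure.

```python
import numpy as np

def single_point_crossover(parent_a: np.ndarray, parent_b: np.ndarray,
                           rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Recombine two flattened feature vectors at one random cut point."""
    assert parent_a.shape == parent_b.shape
    cut = int(rng.integers(1, parent_a.size))  # cut in [1, size) so both parents contribute
    child_a = np.concatenate([parent_a[:cut], parent_b[cut:]])
    child_b = np.concatenate([parent_b[:cut], parent_a[cut:]])
    return child_a, child_b

rng = np.random.default_rng(seed=0)
a = rng.random(8)  # stand-in for a flattened feature map of one scene interaction image
b = rng.random(8)  # stand-in for a second image's features
child_a, child_b = single_point_crossover(a, b, rng)
```

Single-point crossover is only one of several standard operators; uniform or two-point crossover would slot into the same signature.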
Step 200, the target scene interaction image is configured with the target object description content.
Illustratively, the target object description content characterizes features in the target scene interaction image.
Step 300, obtaining the description data of the feature training level corresponding to the target object description content, and obtaining the description data of the standard feature training level corresponding to the target object description content from the target data processing terminal.
Step 400, updating the description data of the standard feature training level according to the processing feature training level corresponding to a nearby candidate scene interaction image, where the nearby candidate scene interaction image is configured with the target object description content.
Step 500, selecting the description data of the target feature training level, on the basis of the update result, from the description data of the feature training level and the description data of the standard feature training level.
Step 600, fusing the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result.
It can be understood that, when the technical solutions described in the above steps 100 to 600 are executed, a target scene interaction image is acquired and configured with target object description content; the description data of the feature training level corresponding to that content is obtained, and the description data of the standard feature training level is obtained from the target data processing terminal, the latter being updated according to the processing feature training level corresponding to a nearby candidate scene interaction image that carries the same target object description content. The description data of the target feature training level is then selected on the basis of the update result, and the target scene interaction image is processed through the corresponding target feature training level to obtain the fusion result. In this way, each time a scene interaction image is acquired, the target feature training level used to process it is looked up at the target data processing terminal. Because the description data of the target feature training level is determined by comparing the description data of the feature training level with that of the standard feature training level, once some object suddenly initiates a scene interaction image and the description data of the feature training level changes, all subsequent scene interaction images uniformly take the changed, optimal description data as the reference. This avoids assigning the wrong feature training level to a scene interaction image when the training texts of multiple images fall out of order, and thereby safeguards the precision of the image fusion processing.
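To make the control flow of steps 100 to 600 concrete, here is a self-contained toy sketch. Every class, field, and selection rule (for example, treating a larger state template as the more recent update) is a hypothetical stand-in, since the disclosure defines the steps but none of their data formats.

```python
from dataclasses import dataclass

@dataclass
class LevelDescription:
    level_id: str
    state_template: int  # illustrative convention: larger means a more recent update

@dataclass
class SceneInteractionImage:
    pixels: list
    target_object_description: str

class TargetDataProcessingTerminal:
    """Toy remote store of standard feature-training-level description data."""
    def __init__(self, standard: dict):
        self._standard = standard
    def get_standard_description(self, content: str) -> LevelDescription:
        return self._standard[content]  # step 300, remote side (reflects step 400 updates)

def fuse(image, local_descriptions, terminal):
    content = image.target_object_description               # steps 100-200
    local = local_descriptions[content]                     # step 300, local side
    standard = terminal.get_standard_description(content)   # step 300, remote side
    # step 500: keep whichever description data carries the better state template
    target = local if local.state_template >= standard.state_template else standard
    return f"fused {len(image.pixels)} pixels via {target.level_id}"  # step 600

img = SceneInteractionImage(pixels=[0.1, 0.9, 0.4], target_object_description="person")
terminal = TargetDataProcessingTerminal({"person": LevelDescription("standard-level", 5)})
print(fuse(img, {"person": LevelDescription("local-level", 3)}, terminal))
# -> fused 3 pixels via standard-level
```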
In an alternative embodiment, the inventor finds that, when the description data of the feature training level corresponding to the target object description content is obtained, the description data training text may be inaccurate, which makes it difficult to obtain that description data accurately. To improve on this, the step of obtaining the description data of the feature training level corresponding to the target object description content described in step 300 may specifically include the technical solutions described in the following steps q1 to q3.
Step q1, obtaining a description data training text of the feature training level, where the description data training text includes object description content training information corresponding to each of a plurality of feature training level states.
Step q2, determining the feature training level state corresponding to the target object description content based on the correlation results between the target object description content and each piece of object description content training information.
Step q3, obtaining the description data of the feature training level based on the current template of the description data training text and the determined feature training level state.
It can be understood that, when the technical solutions described in the above steps q1 to q3 are executed, the problem of an inaccurate description data training text is mitigated, so that the description data of the feature training level corresponding to the target object description content can be obtained accurately.
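A minimal sketch of steps q1 to q3 follows, assuming the correlation result is a token-overlap (Jaccard) score; the disclosure does not specify how correlation is computed, so both the scoring function and the training-text layout are assumptions.

```python
def correlation(a: str, b: str) -> float:
    """Assumed correlation result: Jaccard overlap of whitespace tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_level_state(target_content: str, training_text: dict) -> str:
    # step q2: pick the state whose training information correlates best
    return max(training_text, key=lambda s: correlation(target_content, training_text[s]))

training_text = {  # step q1: description data training text, keyed by level state
    "state-portrait": "person face portrait indoor",
    "state-landscape": "mountain sky outdoor landscape",
}
state = find_level_state("outdoor mountain scene", training_text)
description_data = f"{state}@current-template"  # step q3: combine state with current template
print(description_data)  # -> state-landscape@current-template
```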
In an alternative embodiment, the inventor finds that, when the description data of the standard feature training level corresponding to the target object description content is acquired from the target data processing terminal, the target object projection description content may be unreliable, which makes it difficult to acquire that description data reliably. To improve on this, the step of obtaining the description data of the standard feature training level from the target data processing terminal described in step 300 may specifically include the technical solutions described in the following steps w1 to w3.
Step w1, performing projection processing on the target object description content to obtain target object projection description content.
Step w2, determining the target data processing terminal from the candidate data processing terminals based on the correlation results between the target object projection description content and the object projection description content vectors corresponding to the candidate data processing terminals.
Step w3, determining the description data of the standard feature training level from the candidate feature training level description data corresponding to each candidate object description content in the target data processing terminal, based on the correlation results between the target object description content and each candidate object description content in that terminal.
It can be understood that, when the technical solutions described in the above steps w1 to w3 are executed, the problem of unreliable target object projection description content is mitigated, so that the description data of the standard feature training level corresponding to the target object description content can be acquired reliably.
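Steps w1 and w2 can be pictured as a linear projection followed by a nearest-vector search. The sketch below assumes the projection is a fixed matrix product and the correlation result is cosine similarity; neither choice is stated in the disclosure.

```python
import numpy as np

def project(content_vec: np.ndarray, basis: np.ndarray) -> np.ndarray:
    # step w1, assumed here to be a fixed linear projection
    return basis @ content_vec

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(seed=1)
basis = rng.random((4, 8))                     # assumed shared projection basis
target_proj = project(rng.random(8), basis)    # target object projection description content
candidates = {f"terminal-{i}": rng.random(4) for i in range(3)}  # step w2 candidate vectors
target_terminal = max(candidates, key=lambda k: cosine(target_proj, candidates[k]))
```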
In an alternative embodiment, the inventor finds that, when the update result includes a state template, the update result in the description data of the feature training level and the description data of the standard feature training level may be incomplete, which makes it difficult to select the description data of the target feature training level completely. To improve on this, the selection step described in step 500 may specifically include the technical solution described in step e1 below.
Step e1, selecting, from the description data of the feature training level and the description data of the standard feature training level, the description data corresponding to the optimal state template as the description data of the target feature training level.
It can be understood that, when the technical solution described in the above step e1 is executed, the problem of an incomplete update result is mitigated, so that the description data of the target feature training level can be selected completely.
On the basis of the above, the technical solutions described in the following steps r1 and r2 may also be included.
Step r1, when the state template of the description data of the feature training level is updated, generating a description data update interaction image of the feature training level according to the target object description content and the description data of the feature training level.
Step r2, sending the description data update interaction image to the target data processing terminal, so that the target data processing terminal replaces the description data of the standard feature training level with the description data of the feature training level according to the update interaction image.
It can be understood that, when the technical solutions described in the above steps r1 and r2 are executed, the continuous updating improves the accuracy of the description data of the feature training level.
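A toy sketch of the update path in steps r1 and r2; the dict-based update interaction image and the FeatureLevelStore class are placeholders, since the disclosure leaves the message format and the terminal-side storage unspecified.

```python
class FeatureLevelStore:
    """Placeholder for the terminal-side store of standard description data."""
    def __init__(self):
        self.standard = {}
    def apply_update(self, update: dict) -> None:
        # step r2, terminal side: overwrite the standard description data
        self.standard[update["content"]] = update["description_data"]

def on_state_template_updated(content: str, description_data: str,
                              terminal: FeatureLevelStore) -> None:
    # step r1: package content and description data as an update interaction image
    update = {"content": content, "description_data": description_data}
    terminal.apply_update(update)  # step r2: send it to the target data processing terminal

store = FeatureLevelStore()
on_state_template_updated("person", "feature-level-7@template-9", store)
assert store.standard["person"] == "feature-level-7@template-9"
```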
In an alternative embodiment, the inventor finds that, when the target scene interaction image is fused through the target feature training level corresponding to the description data of the target feature training level, the interaction image state may be inaccurate, which makes it difficult to obtain the fusion result accurately. To improve on this, the fusion step described in step 600 may specifically include the technical solutions described in the following steps t1 to t3.
Step t1, looking up the interaction image state of each candidate scene interaction image corresponding to the target object description content.
Step t2, when the interaction image state of at least one candidate scene interaction image has not reached the standard, fusing the target scene interaction image through the processing feature training level to obtain the fusion result.
Step t3, when the interaction image states of all candidate scene interaction images have reached the standard, fusing the target scene interaction image through the target feature training level to obtain the fusion result.
It can be understood that, when the technical solutions described in the above steps t1 to t3 are executed, the problem of an inaccurate interaction image state is mitigated, so that the fusion result can be obtained accurately.
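Steps t2 and t3 amount to a fallback rule. The sketch below encodes "has reached the standard" as a boolean per candidate scene interaction image, which is an illustrative simplification.

```python
def pick_fusion_level(candidate_states, processing_level, target_level):
    """Steps t2/t3: fall back to the processing feature training level unless every
    candidate scene interaction image's state has reached the standard."""
    return target_level if all(candidate_states) else processing_level

assert pick_fusion_level([True, True], "processing", "target") == "target"
assert pick_fusion_level([True, False], "processing", "target") == "processing"
```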
In an alternative embodiment, the inventor finds that, when the target feature training level is a standard feature training level and the target scene interaction image is fused through the corresponding target feature training level, the standard image feature extraction method may be inaccurate, which makes it difficult to obtain the fusion result accurately. To improve on this, the fusion step described in step 600 for the case where the target feature training level is a standard feature training level may specifically include the technical solutions described in the following steps y1 and y2.
Step y1, acquiring a standard image feature extraction method from the standard feature training level.
Step y2, processing the target scene interaction image based on the standard image feature extraction method to obtain the fusion result.
It can be understood that, when the technical solutions described in the above steps y1 and y2 are executed, the problem of an inaccurate standard image feature extraction method is mitigated for the case where the target feature training level is a standard feature training level, so that the fusion result can be obtained accurately.
In an alternative embodiment, when the target feature training level is a dedicated feature training level, the dedicated feature training level stores a candidate image feature extraction method for each of a plurality of candidate image description contents. The inventor finds that, in this case, the target image feature extraction method corresponding to the target image description content may be inaccurate when the target scene interaction image is processed through the corresponding target feature training level, which makes it difficult to obtain the fusion result accurately. To improve on this, for the target scene interaction image configured with target image description content, the processing step described in step 600 may specifically include the technical solutions described in the following steps u1 and u2.
Step u1, acquiring the target image feature extraction method corresponding to the target image description content from the dedicated feature training level.
Step u2, processing the target scene interaction image based on the target image feature extraction method to obtain the fusion result.
It can be understood that, when the technical solutions described in the above steps u1 and u2 are executed, the problem of an inaccurate target image feature extraction method is mitigated for the case where the target feature training level is a dedicated feature training level, so that the fusion result can be obtained accurately.
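Steps y1/y2 and u1/u2 differ only in how the extraction method is looked up: a standard feature training level holds one method, while a dedicated level holds one per candidate image description content. The dispatch sketch below uses placeholder callables for both.

```python
def extract_features(level_kind: str, image_content: str,
                     dedicated_methods: dict, standard_method):
    if level_kind == "standard":
        return standard_method()               # steps y1/y2: single standard method
    return dedicated_methods[image_content]()  # steps u1/u2: method keyed by content

dedicated = {
    "portrait": lambda: "portrait features",
    "landscape": lambda: "landscape features",
}
print(extract_features("dedicated", "portrait", dedicated, lambda: "standard features"))
# -> portrait features
```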
In a possible embodiment, the current image feature extraction method includes the standard image feature extraction method or the target image feature extraction method. The inventor finds that, when the target scene interaction image is processed based on the current image feature extraction method, the candidate image processing training model corresponding to the target object description content may be inaccurate, which makes it difficult to obtain the fusion result accurately. To improve on this, the step of obtaining the fusion result by processing the target scene interaction image based on the current image feature extraction method, described in step 600, may specifically include the technical solutions described in the following steps i1 and i2.
Step i1, when the processing feature training level differs from the target feature training level, obtaining the candidate image processing training model corresponding to the target object description content from the processing feature training level.
Step i2, sending the candidate image processing training model to the target feature training level, so that the target feature training level processes the target scene interaction image based on the candidate image processing training model and the current image feature extraction method to obtain the fusion result.
It can be understood that, when the technical solutions described in steps i1 and i2 are executed, the problem of an inaccurate candidate image processing training model corresponding to the target object description content is mitigated, so that the fusion result can be obtained accurately.
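A sketch of steps i1 and i2, assuming the candidate image processing training model can be handed to the target feature training level as an opaque value; the model representation and the transport mechanism are not disclosed, so everything here is illustrative.

```python
def fuse_with_candidate_model(processing_level_models: dict, target_level_run,
                              content: str, current_extraction):
    # step i1: the processing level differs from the target level, so fetch the
    # candidate image processing training model held for this object content
    candidate_model = processing_level_models[content]
    # step i2: hand it to the target level along with the current extraction method
    return target_level_run(candidate_model, current_extraction)

result = fuse_with_candidate_model(
    {"person": "candidate-model-A"},
    lambda model, extract: f"fused with {model} using {extract()}",
    "person",
    lambda: "current extraction method",
)
print(result)  # -> fused with candidate-model-A using current extraction method
```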
In a possible embodiment, the inventor finds that, when the candidate image processing training model is sent to the target processing layer so that the target processing layer processes the target scene interaction image based on the candidate image processing training model and the current image feature extraction method, the target image information may be inaccurate, which makes it difficult to obtain the fusion result accurately. To improve on this, the sending step described in step i2 may specifically include the technical solutions described in the following steps o1 to o4.
Step o1, acquiring target image information from the target scene interaction image.
Step o2, obtaining a target image processing training model based on the candidate image processing training model and the target image information.
Step o3, acquiring a standard image processing training model, and determining standard image information based on the standard image processing training model and the target image processing training model.
Step o4, obtaining the fusion result based on the target image information and the standard image information.
It can be understood that, when the technical solutions described in steps o1 to o4 are executed, the problem of inaccurate target image information is mitigated when the target processing layer processes the target scene interaction image based on the candidate image processing training model and the current image feature extraction method, so that the fusion result can be obtained accurately.
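Steps o1 to o4 chain four derivations whose actual operators are not disclosed; the sketch below substitutes simple vector arithmetic (a mean for the target image information, sums and averages for the model combinations) purely to show the data flow.

```python
import numpy as np

def fuse_o1_to_o4(image: np.ndarray, candidate_model: np.ndarray,
                  standard_model: np.ndarray) -> np.ndarray:
    target_info = image.mean(axis=0)                     # step o1: assumed summary statistic
    target_model = candidate_model + target_info         # step o2: assumed combination
    standard_info = (standard_model + target_model) / 2  # step o3: assumed comparison
    return (target_info + standard_info) / 2             # step o4: assumed blend

rng = np.random.default_rng(seed=2)
fusion = fuse_o1_to_o4(rng.random((5, 4)), rng.random(4), rng.random(4))
```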
On the basis of the above, please refer to fig. 2, which shows an image feature fusion apparatus 200 based on a genetic crossover algorithm, applied to a data processing terminal, the apparatus including:
an image obtaining module 210, configured to acquire a target scene interaction image;
a content configuration module 220, configured to configure target object description content for the target scene interaction image;
a data obtaining module 230, configured to obtain the description data of the feature training level corresponding to the target object description content, and to obtain the description data of the standard feature training level corresponding to that content from a target data processing terminal;
a content description module 240, configured to update the description data of the standard feature training level according to the processing feature training level corresponding to a nearby candidate scene interaction image, where the nearby candidate scene interaction image is configured with the target object description content;
a result updating module 250, configured to select the description data of the target feature training level, on the basis of the update result, from the description data of the feature training level and the description data of the standard feature training level;
and a result fusion module 260, configured to fuse the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result.
On the basis of the above, please refer to fig. 3, which shows an image feature fusion system 300 based on a genetic crossover algorithm, comprising a processor 310 and a memory 320 that communicate with each other, where the processor 310 is configured to read a computer program from the memory 320 and execute it to implement the above method.
On the basis of the above, there is also provided a computer-readable storage medium on which a computer program is stored, which when executed implements the above-described method.
In conclusion, based on the above scheme, a target scene interaction image is acquired and configured with target object description content. The description data of the feature training level corresponding to that content is obtained, and the description data of the standard feature training level is obtained from a target data processing terminal; the latter is updated according to the processing feature training level corresponding to a nearby candidate scene interaction image that carries the same target object description content. The description data of the target feature training level is then selected on the basis of the update result, and the target scene interaction image is processed through the corresponding target feature training level to obtain the fusion result. In this way, each time a scene interaction image is acquired, the target feature training level used to process it is looked up at the target data processing terminal. Because the description data of the target feature training level is determined by comparing the description data of the feature training level with that of the standard feature training level, once some object suddenly initiates a scene interaction image and the description data of the feature training level changes, all subsequent scene interaction images uniformly take the changed, optimal description data as the reference. This avoids assigning the wrong feature training level to a scene interaction image when the training texts of multiple images fall out of order, and thereby safeguards the precision of the image fusion processing.
It should be appreciated that the system and its modules shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages, and the like. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the numbers allow for adaptive variation. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference into this application, except for any prosecution history inconsistent with or in conflict with the present disclosure, and except for any material that would limit the broadest scope of the claims now or later associated with this application. It is noted that if the descriptions, definitions, and/or use of terms in the material attached to this application are inconsistent or in conflict with those of this application, the descriptions, definitions, and/or use of terms of this application shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application can be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those embodiments explicitly described and depicted herein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image feature fusion method based on a genetic crossover algorithm, characterized by comprising the following steps:
acquiring a target scene interaction image;
wherein the target scene interaction image is configured with target object description content;
obtaining description data of a feature training level corresponding to the target object description content, and obtaining description data of a standard feature training level corresponding to the target object description content from a target data processing terminal;
wherein the description data of the standard feature training level is updated according to a processing feature training level corresponding to a candidate adjacent scene interaction image, and the candidate adjacent scene interaction image is configured with the target object description content;
selecting description data of a target feature training level based on an update result among the description data of the feature training level and the description data of the standard feature training level;
and performing fusion processing on the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result.
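By way of illustration only (not part of the claims), the following minimal Python sketch mirrors the claim-1 flow under assumptions of ours: a level's update result is reduced to an integer state template, and the level with the newer template is chosen. All names here (FeatureLevel, select_target_level, fuse) are invented for this sketch, not taken from the patent.

```python
# Hypothetical sketch of the claim-1 selection: compare the local feature
# training level against the terminal's standard level by state template,
# then fuse the image through whichever level wins.
from dataclasses import dataclass

@dataclass
class FeatureLevel:
    name: str
    state_template: int          # assumption: higher = more recently updated

    def fuse(self, image):
        # Placeholder fusion: tag the image with the level that processed it.
        return {"level": self.name, "fused": image}

def select_target_level(local: FeatureLevel, standard: FeatureLevel) -> FeatureLevel:
    # Claim 4 reads the update result as a state template; we assume the
    # level with the newer (larger) template is optimal.
    return local if local.state_template >= standard.state_template else standard

if __name__ == "__main__":
    local = FeatureLevel("local", state_template=3)
    standard = FeatureLevel("standard", state_template=5)
    target = select_target_level(local, standard)
    print(target.fuse(image=[[0, 1], [1, 0]]))   # fused by the "standard" level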
2. The method according to claim 1, wherein the obtaining of the description data of the feature training level corresponding to the target object description content comprises:
obtaining a description data training text of the feature training level, wherein the description data training text comprises object description content training information corresponding to each of a plurality of feature training level states;
determining the feature training level state corresponding to the target object description content based on a correlation result between the target object description content and each piece of object description content training information;
and obtaining the description data of the feature training level based on a current template of the description data training text and the determined feature training level state.
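For illustration, a hedged sketch of the claim-2 state selection, assuming the "correlation result" is cosine similarity over bag-of-words vectors; the claim does not specify the measure, and the names level_state_for and training_text, along with the example strings, are hypothetical.

```python
# Sketch: pick the feature-training-level state whose object-description
# training information correlates best with the target object description.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def level_state_for(description: str, training_text: dict) -> str:
    # training_text maps each level state to its training information string.
    target = Counter(description.split())
    return max(training_text,
               key=lambda state: cosine(target, Counter(training_text[state].split())))

training_text = {                      # invented example content
    "state_a": "pedestrian street crossing daytime",
    "state_b": "vehicle highway night headlights",
}
print(level_state_for("pedestrian crossing at daytime", training_text))  # state_a
```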
3. The method according to claim 1, wherein the obtaining of the description data of the standard feature training level corresponding to the target object description content from the target data processing terminal comprises:
performing projection processing on the target object description content to obtain target object projection description content;
determining the target data processing terminal from among candidate data processing terminals based on a correlation result between the target object projection description content and object projection description content vectors corresponding to the plurality of candidate data processing terminals;
and determining the description data of the standard feature training level from candidate feature training level description data corresponding to each candidate object description content in the target data processing terminal, based on a correlation result between the target object description content and each candidate object description content in the target data processing terminal.
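A sketch of claim 3 under two assumptions of ours: "projection processing" is modelled as a fixed random linear projection, and "correlation result" as a dot product. Neither is specified by the claim; pick_terminal, PROJ, and the example vectors are invented.

```python
# Sketch: project the target object description to a low-dimensional vector,
# then choose the candidate terminal whose object projection vector
# correlates best with it.
import random

random.seed(0)
DIM_IN, DIM_OUT = 8, 3
PROJ = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_OUT)]

def project(vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in PROJ]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pick_terminal(target_vec, terminal_vecs):
    # terminal_vecs: terminal id -> that terminal's object projection vector.
    p = project(target_vec)
    return max(terminal_vecs, key=lambda tid: dot(p, terminal_vecs[tid]))

terminals = {"t1": [1.0, 0.0, 0.0], "t2": [0.0, 1.0, 0.5]}
target = [0.2] * DIM_IN                  # invented target object description
print(pick_terminal(target, terminals))
```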
4. The method of claim 1, wherein the update result comprises a state template, and the selecting of the description data of the target feature training level based on the update result among the description data of the feature training level and the description data of the standard feature training level comprises:
selecting, from the description data of the feature training level and the description data of the standard feature training level, the description data corresponding to the optimal state template as the description data of the target feature training level;
wherein the method further comprises:
when the state template of the description data of the feature training level is updated, generating a description data update interaction image of the feature training level according to the target object description content and the description data of the feature training level;
and sending the description data update interaction image of the feature training level to the target data processing terminal, so that the target data processing terminal updates the description data of the standard feature training level to the description data of the feature training level according to the description data update interaction image.
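A minimal sketch of the update path in claim 4, assuming the update interaction image is a plain record carrying the object description content and the new level description data; the Terminal class and all field names are invented, not taken from the patent.

```python
# Sketch: when the local level's state template advances, push an "update
# interaction image" so the terminal replaces its standard-level description.
class Terminal:
    def __init__(self):
        self.standard = {}   # object description -> level description data

    def receive_update(self, update_image):
        # Adopt the pushed level description as the new standard level.
        self.standard[update_image["object"]] = update_image["level_data"]

def push_update(terminal, object_description, level_data):
    update_image = {"object": object_description, "level_data": level_data}
    terminal.receive_update(update_image)

terminal = Terminal()
push_update(terminal, "pedestrian", {"state_template": 6, "weights": "..."})
print(terminal.standard["pedestrian"]["state_template"])   # 6
```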
5. The method of claim 1, wherein the fusion processing of the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain the fusion result comprises:
searching for an interaction image state of each candidate scene interaction image corresponding to the target object description content;
when the interaction image state of at least one candidate scene interaction image has not been reached, performing fusion processing on the target scene interaction image through the processing feature training level to obtain the fusion result;
when the interaction image states of all candidate scene interaction images have been reached, performing fusion processing on the target scene interaction image through the target feature training level to obtain the fusion result;
wherein, when the target feature training level is a standard feature training level, the fusion processing of the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain the fusion result comprises:
acquiring a standard image feature extraction method from the standard feature training level;
and processing the target scene interaction image based on the standard image feature extraction method to obtain the fusion result;
wherein the target scene interaction image is configured with target image description content; when the target feature training level is a special feature training level, the special feature training level stores a candidate image feature extraction method corresponding to each of a plurality of candidate image description contents, and the fusion processing of the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain the fusion result comprises:
acquiring a target image feature extraction method corresponding to the target image description content from the special feature training level;
and processing the target scene interaction image based on the target image feature extraction method to obtain the fusion result.
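An illustrative sketch of the claim-5 dispatch, with the gate on candidate interaction image states reduced to a list of booleans; standard_extract, SPECIAL_METHODS, and the lambdas are stand-ins of ours for the extraction methods the claim names, not the patent's own.

```python
# Sketch: a standard level applies one shared extraction method, a special
# level looks up the method keyed by the target image description, and if any
# candidate scene interaction image state is not reached we fall back to the
# processing feature training level.
def standard_extract(image):
    return ("standard", image)

SPECIAL_METHODS = {                      # invented mapping for a special level
    "night": lambda img: ("night-tuned", img),
    "day":   lambda img: ("day-tuned", img),
}

def fuse(image, image_description, level_kind, candidate_states, processing_extract):
    if not all(candidate_states):        # some candidate image state not reached
        return processing_extract(image)
    if level_kind == "standard":
        return standard_extract(image)
    # Special level: method chosen by the target image description content.
    return SPECIAL_METHODS[image_description](image)

print(fuse([[1]], "night", "special", [True, True], lambda i: ("processing", i)))
```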
6. An image feature fusion system based on a genetic crossover algorithm, characterized by comprising a data acquisition end and a data processing terminal, wherein the data acquisition end is in communicative connection with the data processing terminal, and the data processing terminal is specifically configured to:
acquire a target scene interaction image;
wherein the target scene interaction image is configured with target object description content;
obtain description data of a feature training level corresponding to the target object description content, and obtain description data of a standard feature training level corresponding to the target object description content from a target data processing terminal;
wherein the description data of the standard feature training level is updated according to a processing feature training level corresponding to a candidate adjacent scene interaction image, and the candidate adjacent scene interaction image is configured with the target object description content;
select description data of a target feature training level based on an update result among the description data of the feature training level and the description data of the standard feature training level;
and perform fusion processing on the target scene interaction image through the target feature training level corresponding to the description data of the target feature training level to obtain a fusion result.
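Purely to visualize the claim-6 topology, a sketch in which the communicative connection between the data acquisition end and the data processing terminal is an in-process queue; in a deployment this would be a network link, and every class and method name here is assumed rather than drawn from the patent.

```python
# Sketch: a data acquisition end pushes each captured scene interaction image
# to the data processing terminal, which would perform the claim-1 selection
# and fusion on receipt.
import queue

class DataProcessingTerminal:
    def __init__(self):
        self.inbox = queue.Queue()

    def submit(self, image, object_description):
        self.inbox.put((image, object_description))

    def run_once(self):
        image, desc = self.inbox.get()
        # Placeholder for the level selection and fusion of claim 1.
        return {"object": desc, "fused": image}

class DataAcquisitionEnd:
    def __init__(self, terminal):
        self.terminal = terminal   # communicative connection (in-process here)

    def capture_and_send(self, image, object_description):
        self.terminal.submit(image, object_description)

terminal = DataProcessingTerminal()
acquirer = DataAcquisitionEnd(terminal)
acquirer.capture_and_send([[0, 1]], "pedestrian")
print(terminal.run_once())
```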
7. The system of claim 6, wherein the data processing terminal is specifically configured to:
obtain a description data training text of the feature training level, wherein the description data training text comprises object description content training information corresponding to each of a plurality of feature training level states;
determine the feature training level state corresponding to the target object description content based on a correlation result between the target object description content and each piece of object description content training information;
and obtain the description data of the feature training level based on a current template of the description data training text and the determined feature training level state.
8. The system of claim 6, wherein the data processing terminal is specifically configured to:
perform projection processing on the target object description content to obtain target object projection description content;
determine the target data processing terminal from among candidate data processing terminals based on a correlation result between the target object projection description content and object projection description content vectors corresponding to the plurality of candidate data processing terminals;
and determine the description data of the standard feature training level from candidate feature training level description data corresponding to each candidate object description content in the target data processing terminal, based on a correlation result between the target object description content and each candidate object description content in the target data processing terminal.
9. The system of claim 6, wherein the data processing terminal is specifically configured to:
select, from the description data of the feature training level and the description data of the standard feature training level, the description data corresponding to the optimal state template as the description data of the target feature training level;
wherein the data processing terminal is further configured to:
when the state template of the description data of the feature training level is updated, generate a description data update interaction image of the feature training level according to the target object description content and the description data of the feature training level;
and send the description data update interaction image of the feature training level to the target data processing terminal, so that the target data processing terminal updates the description data of the standard feature training level to the description data of the feature training level according to the description data update interaction image.
10. The system of claim 6, wherein the data processing terminal is specifically configured to:
search for an interaction image state of each candidate scene interaction image corresponding to the target object description content;
when the interaction image state of at least one candidate scene interaction image has not been reached, perform fusion processing on the target scene interaction image through the processing feature training level to obtain the fusion result;
when the interaction image states of all candidate scene interaction images have been reached, perform fusion processing on the target scene interaction image through the target feature training level to obtain the fusion result;
wherein the data processing terminal is specifically configured to:
acquire a standard image feature extraction method from the standard feature training level;
and process the target scene interaction image based on the standard image feature extraction method to obtain the fusion result;
wherein the data processing terminal is specifically configured to:
acquire a target image feature extraction method corresponding to the target image description content from a special feature training level;
and process the target scene interaction image based on the target image feature extraction method to obtain the fusion result.
CN202110841757.3A 2021-07-26 2021-07-26 Genetic crossing algorithm-based image feature fusion method and system Pending CN113610127A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110841757.3A CN113610127A (en) 2021-07-26 2021-07-26 Genetic crossing algorithm-based image feature fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110841757.3A CN113610127A (en) 2021-07-26 2021-07-26 Genetic crossing algorithm-based image feature fusion method and system

Publications (1)

Publication Number Publication Date
CN113610127A true CN113610127A (en) 2021-11-05

Family

ID=78338246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110841757.3A Pending CN113610127A (en) 2021-07-26 2021-07-26 Genetic crossing algorithm-based image feature fusion method and system

Country Status (1)

Country Link
CN (1) CN113610127A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination