CN111160453A - Information processing method and device and computer readable storage medium - Google Patents

Information processing method and device and computer readable storage medium

Info

Publication number
CN111160453A
Authority
CN
China
Prior art keywords
image
information
category
category information
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911378909.XA
Other languages
Chinese (zh)
Inventor
李睿易
杜杨洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911378909.XA priority Critical patent/CN111160453A/en
Publication of CN111160453A publication Critical patent/CN111160453A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an information processing method comprising the following steps: acquiring a first image; inputting the first image into a trained artificial neural network model to obtain first category information, the first category information representing the category to which the first image belongs; acquiring a second image; inputting the second image into the trained artificial neural network model to obtain second category information; and obtaining first difference information based on the first category information and the second category information, the first difference information representing the difference between the first category information and the second category information. The embodiment of the application also discloses an information processing device and a computer-readable storage medium.

Description

Information processing method and device and computer readable storage medium
Technical Field
The present invention relates to the field of mobile electronic devices, and in particular, to an information processing method, device, and computer-readable storage medium.
Background
Because implementing children's education through applications on electronic devices offers great flexibility and convenience, more and more families install applications of various categories on their devices to educate children, particularly infants, at home. However, existing applications manage and display only data preset by the application, based on categories the application itself has fixed in advance. For data that a user supplies through other channels, which may interest the child at the moment but does not belong to the application's fixed categories, the application can perform neither category classification nor category-difference analysis.
Disclosure of Invention
In view of the above, embodiments of the present application provide an information processing method, an information processing device, and a computer-readable storage medium, which solve the problem in the related art that an application cannot perform category classification and category-difference analysis on data outside its preset categories.
To achieve this purpose, the technical solution of the application is implemented as follows:
an information processing method, the method comprising:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first category information; the first category information is used for representing the category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information;
obtaining first difference information based on the first category information and the second category information; wherein the first difference information is used for representing the difference between the first category information and the second category information.
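The claimed sequence of steps can be sketched in code. The following is a minimal, hypothetical illustration, not the patent's implementation; `TrainedModel`, its label fields, and the sample images are invented stand-ins for a real trained classifier.

```python
class TrainedModel:
    """Stand-in for the trained artificial neural network model."""
    def classify(self, image):
        # A real model would run inference on pixels; here we simply
        # read pre-attached labels so the example stays self-contained.
        return {"category": image["label"], "kingdom": image["kingdom"]}

def difference_info(first_info, second_info):
    """Collect the information items whose values differ between the
    first category information and the second category information."""
    return {key: (first_info[key], second_info.get(key))
            for key in first_info if first_info[key] != second_info.get(key)}

model = TrainedModel()
first_info = model.classify({"label": "tiger", "kingdom": "animal"})
second_info = model.classify({"label": "dog", "kingdom": "animal"})
first_difference_info = difference_info(first_info, second_info)
```

Both images pass through the same trained model, and the first difference information is derived purely from the two category-information results, mirroring the order of the claimed steps.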
Optionally, the method further includes:
acquiring image sample data;
and adjusting parameters of the artificial neural network model based on the image sample data until the parameters of the artificial neural network model meet training ending conditions to obtain the trained artificial neural network model.
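The training step described above, adjusting parameters until a training-end condition is met, can be sketched with a toy one-weight model fitted by gradient descent. This is an illustrative assumption about the procedure, not the patent's training algorithm; the sample data, learning rate, and tolerance are invented.

```python
def train(samples, lr=0.1, tolerance=1e-3, max_epochs=1000):
    """Adjust the single weight w of the model y = w * x until the mean
    squared error satisfies the training-end condition (loss < tolerance)."""
    w = 0.0
    for _ in range(max_epochs):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
        loss = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
        if loss < tolerance:  # training-end condition met
            break
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # image sample stand-in: y = 2x
trained_w = train(samples)
```

The loop structure (adjust parameters, check the end condition, stop) is the point of the sketch; a real image model would update a full weight matrix rather than one scalar.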
Optionally, the method further includes:
determining a parameter training rule;
determining a training end condition based on the training rule;
acquiring a plurality of third images;
and adjusting parameters of the artificial neural network model based on the parameter training rule and the third images until the parameters of the artificial neural network model meet the training end condition, so as to obtain the trained artificial neural network model.
Optionally, the inputting the first image into the trained artificial neural network model to obtain the first category information includes:
acquiring feature dimension information;
inputting the first image into the trained artificial neural network model;
and processing the first image by using the trained artificial neural network model based on the characteristic dimension information to obtain first class information.
Optionally, the processing the first image by using the trained artificial neural network model based on the feature dimension information to obtain first category information includes:
processing the first image by using the trained artificial neural network model to obtain third category information;
determining the first category information based on the feature dimension information and the third category information.
Optionally, the acquiring the second image includes:
acquiring the second image based on historical operation information of the user; and/or,
acquiring the second image based on the first category information.
Optionally, the obtaining the second image based on the first category information includes:
acquiring image correlation parameters;
determining target characteristic information based on the image correlation parameters and the first category information;
determining the second image based on the target feature information.
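The three steps above can be sketched as follows. This is a hypothetical illustration; the image library, the feature keys, and the correlation parameter are invented for the example.

```python
def select_second_image(image_library, first_category_info, correlation_key):
    """Determine the target feature from the first category information and
    the image-correlation parameter, then return the first stored image
    whose features match that target feature."""
    target_feature = first_category_info[correlation_key]
    for image in image_library:
        if image["features"].get(correlation_key) == target_feature:
            return image
    return None

image_library = [
    {"name": "stored_dog.png", "features": {"kingdom": "animal"}},
    {"name": "stored_rose.png", "features": {"kingdom": "plant"}},
]
first_info = {"category": "tiger", "kingdom": "animal"}
second_image = select_second_image(image_library, first_info, "kingdom")
```

Here the correlation parameter names which dimension of the first category information drives the match, so a tiger image retrieves another animal image rather than a plant image.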
Optionally, the method further includes:
performing editing operation on the first image and/or the second image to obtain a third image;
and inputting the third image into the trained artificial neural network model to obtain fourth category information.
An information processing apparatus, the apparatus comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is used for executing a program of the information processing method stored in the memory to implement the following steps:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first category information; the first category information is used for representing the category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information;
obtaining first difference information based on the first category information and the second category information; wherein the first difference information is used for representing the difference between the first category information and the second category information.
A computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the information processing method of any one of the preceding claims.
The information processing method provided by the embodiment of the application acquires a first image, inputs it into a trained artificial neural network model to obtain first category information, acquires a second image, inputs it into the trained model to obtain second category information, and obtains first difference information based on the two sets of category information. The method thus fully exploits the trained artificial neural network model's accurate recognition and classification of both the first image and the second image, and from the two classification results derives the first difference information between the first category information and the second category information.
Drawings
FIG. 1 is a diagram illustrating an application program for classifying images in the prior art;
FIG. 2 is a flowchart of a first information processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a second information processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a third information processing method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of editing a first image and a second image in an information processing method provided by an embodiment of the present invention;
fig. 6 is a flowchart of a specific implementation of an information processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating obtaining of interpretation information in an information processing method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
With the development of networks, the increasing power of terminals, and the fast pace of modern life, more and more families adopt applications on terminals to implement family education, particularly for children.
In the current application market there is a wide variety of applications related to early-childhood education. These applications share some common features: they can classify and manage the data built into the application, and manage and display data corresponding to the classification categories preset in the application. As shown in fig. 1, an application may help children recognize animals: categories such as rabbit, butterfly, bee, horse, dog, dolphin, and monkey are fixed and stored in the application, and animals within these categories are generally recognized correctly. The application, however, cannot recognize or classify animals outside these categories, such as tigers or pangolins, and may even misidentify similar animals; the dog in the right half of fig. 1, for example, may be confused with the monkey, dolphin, and horse categories.
Applications of the above kind cannot classify and manage data such as pictures entered by children, in particular data that does not match any classification category preset in the application, and consequently cannot summarize the category differences between such data and the application's preset classification categories.
Based on this, an embodiment of the present application provides an information processing method, which is implemented by an information processing apparatus, as shown in fig. 2, and includes the steps of:
step 101, acquiring a first image.
In step 101, the first image may be an image recognizable by the electronic device.
The electronic device may be a mobile electronic device, such as a smart phone, a notebook computer, or the like; the electronic equipment can also be a smart television.
In one embodiment, the first image may be an image stored in the electronic device itself, for example, an image stored in a file management system of the electronic device.
In one embodiment, the first image may be an image stored in a database of the electronic device.
In one embodiment, the first image may be an image captured by an image capturing device of the electronic device. Such as images taken randomly using a camera of the electronic device.
In one embodiment, the first image may be a screen capture image obtained by capturing a screen of a video in a playing state when the electronic device plays the video.
In one embodiment, the first image may be an image with a first target object. The first target object may be a dog, a mountain or a flower, and accordingly, the first image may be an image with a dog, a mountain or a flower.
In one embodiment, the first image may be an image containing N target objects, where N is an integer greater than 2, such as a first target object, a second target object, and so on up to an Nth target object.
In one embodiment, the first image may be an image that is not classified by an application of the electronic device.
In one embodiment, the first image may be an image that does not match any of the classification categories preset in the application program of the electronic device.
Step 102, inputting the first image into a trained Artificial Neural Network (ANN) model to obtain first category information.
The first category information is used for representing information of a category to which the first image belongs.
In step 102, the ANN model is a mathematical model that, starting from the basic principles of biological neural networks and an understanding and abstraction of the structure of the human brain and its response mechanism to external stimuli, uses network topology as its theoretical basis to simulate how the human nervous system processes complex information. The ANN model is characterized by parallel distributed processing, high fault tolerance, intelligence, and self-learning ability; it combines the processing and storage of information, and its distinctive knowledge representation and adaptive learning capability have attracted attention in many fields. It is in fact a complex network of interconnected simple elements, highly nonlinear, and capable of complex logical operations and of realizing nonlinear relationships.
An ANN model is a computational model formed by a large number of interconnected nodes (neurons). Each node represents a particular output function, called the activation function. The connection between every two nodes carries a weighted value, called a weight, applied to the signal passing through the connection; in this way the neural network simulates human memory. The output of the network depends on its structure, its connection pattern, the weights, and the activation functions. The network itself is usually an approximation of some algorithm or function found in nature, and may also express a logical strategy. The design of neural networks was inspired by the operation of biological neural networks; an artificial neural network combines knowledge of biological neural networks with mathematical-statistical models and is realized with statistical tools. In the artificial-perception branch of artificial intelligence, statistical methods give a neural network human-like decision and simple judgment abilities, a further extension of traditional logical computation.
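The weighted connections and activation function described above reduce, for a single node, to a weighted sum passed through a nonlinearity. A minimal sketch follows; the sigmoid activation and the sample numbers are illustrative choices, not specified by the patent.

```python
import math

def neuron(inputs, weights, bias):
    """One node: a weighted sum of its inputs passed through an
    activation function (here a sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# z = 1.0*0.4 + 0.5*(-0.2) + 0.1 = 0.4, so the output is sigmoid(0.4)
output = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

A full network chains many such nodes layer by layer; training, as described later, changes only the `weights` and `bias` values, never the activation function itself.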
The ANN model has the following basic characteristics:
high parallelism: the artificial neural network is formed by combining a plurality of identical simple processing units in parallel, and although the function of each neuron is simple, the parallel processing capability and effect of a large number of simple neurons are quite remarkable. The artificial neural network is similar to the human brain in not only structural parallelism but also parallel and simultaneous processing sequence. While processing units within the same layer all operate simultaneously, i.e., the computational functions of the neural network are distributed over multiple processing units, a typical computer typically has one processing unit with a serial processing sequence.
Highly nonlinear global effects: each neuron of the artificial neural network receives the input of a large number of other neurons, the output is generated through the parallel network, the other neurons are influenced, the mutual restriction and mutual influence between the networks are realized, the nonlinear mapping from an input state to an output state space is realized, and from the global point of view, the overall performance of the network is not the superposition of the local performance of the network, but shows certain collective behavior.
Associative memory function and good fault tolerance: the artificial neural network stores the processed data information in the weight values among the neurons through a specific network structure of the artificial neural network, has an association memory function, and does not see the memorized information content from a single weight value, so that the artificial neural network is in a distributed storage form, has good fault tolerance, can perform mode information processing work such as feature extraction, defect mode restoration, cluster analysis and the like, and can perform mode association, classification and identification. It can learn from imperfect data and patterns and make decisions. Because knowledge exists in the whole system, not only in one storage unit, nodes with the reserved proportion do not participate in operation, and the performance of the whole system is not greatly influenced. The method can process noisy or incomplete data, and has generalization function and strong fault-tolerant capability.
Good self-adaptation, self-learning function: the artificial neural network obtains the weight and the structure of the network through learning training, and presents strong self-learning capability and self-adaption capability to the environment. The self-learning process of the neural network simulates human visual thinking, which is a non-logical non-language completely different from the traditional symbolic logic. Adaptivity the solution to the problem is found by learning and training from the data provided, finding the intrinsic relationship between the input and output, rather than relying on empirical knowledge and rules of the problem, and thus having an adaptive function, which is very beneficial to de-emphasis of weight determination artifacts.
Distributed storage of knowledge: in a neural network, knowledge is not stored in a specific storage unit, but is distributed throughout the system, and many links are required to store a plurality of knowledge.
Non-convexity: the direction of evolution of a system will, under certain conditions, depend on a particular state function. For example an energy function, the extreme values of which correspond to a more stable state of the system. Non-convexity means that the function has a plurality of extreme values, so that the system has a plurality of stable equilibrium states, which leads to the diversity of the system evolution.
The ANN model also has the following intelligent characteristics:
associative memory function: since the neural network has the performance of distributed storage information and parallel computation, the neural network has the capacity of associative memory of external stimulation and input information.
Classification and identification functions: the neural network has strong recognition and classification capability on external input samples. The classification of the input samples is actually to find the segmentation regions in the sample space that meet the classification requirements, and the samples in each region belong to one class.
Optimizing a calculation function: the optimization calculation refers to finding a group of parameter combinations under the known constraint condition, and enabling the objective function determined by the combination to be minimum.
The non-linear mapping function: the reasonably designed ANN model can approach any complex nonlinear function with any precision theoretically by training and learning system input and output samples. This excellent performance of neural networks makes it possible to act as a general mathematical model of multidimensional non-linear functions.
When an ANN model is constructed, the structure and the transfer functions of its neurons are already determined. The transfer functions cannot be changed while the ANN model learns; therefore, if the output of the ANN model is to be changed, this can only be achieved by changing the input to the weighted sum. Because a neuron can only respond to the input signals it receives, and changing the weighted input requires modifying the weight parameters of the network's neurons, training an ANN model is the process of changing its weight matrix.
Training of the ANN model can be carried out through deep learning. Deep-learning algorithms break the limit that traditional neural networks place on the number of layers, and the number of layers can be chosen according to the designer's needs. An ANN model trained with deep learning not only greatly improves image-recognition accuracy but also avoids the time-consuming work of hand-crafting features, so online operating efficiency is greatly improved.
In step 102, the trained ANN model may be an ANN model trained by deep learning.
In one embodiment, the trained ANN model may be trained by deep learning and able to recognize data of interest to children.
In one embodiment, the trained ANN model may be trained by deep learning and able to recognize target-object data in images of interest to infants.
In step 102, the first category information indicating the category to which the first image belongs may be category information to which a first target object in the first image belongs, for example, whether the first target object is an animal or a plant.
In one embodiment, the first category information used for indicating the information of the category to which the first image belongs may be category information to which at least two target objects in the first image belong, such as category information to which the first target object and the mth target object belong, where M is an integer greater than or equal to 2.
Step 103, acquiring a second image.
In step 103, the second image may be an image stored in an application program.
In one embodiment, the second image may be an image that has been classified in the application.
Step 104, inputting the second image into the trained ANN model to obtain second category information.
The second category information is used for representing information of a category to which the second image belongs;
the second category information obtained in step 104 for representing the information of the category to which the second image belongs may be category information to which a second target object in the second image belongs, for example, whether the second target object is an animal or a plant.
In one embodiment, the second category information used for indicating the information of the category to which the second image belongs may be category information to which at least two target objects in the second image belong, such as category information to which the second target object and the mth target object belong, where M is an integer greater than or equal to 2.
Step 105, obtaining first difference information based on the first category information and the second category information.
The first difference information is used for representing the difference between the first category information and the second category information.
In step 105, the first difference information indicating the difference between the first category information and the second category information may be a set of difference information of all corresponding information items in the first category information and the second category information.
In one embodiment, the first difference information indicating the difference between the first category information and the second category information may be a set of difference information of partially corresponding information items in the first category information and the second category information.
In one embodiment, the first difference information indicating a difference between the first category information and the second category information may be a set of difference information of corresponding information items preset in the first category information and the second category information.
In one embodiment, the first difference information may be obtained as follows: the information processing device displays the first category information; receives a selection instruction for information entries in the first category information, obtaining a first target-information-entry list; searches the second category information for the corresponding second target-information-entry list based on the first list; and then displays each information entry of the first target-information-entry list together with each information entry of the second target-information-entry list.
In one embodiment, the first difference information may be obtained symmetrically: the information processing device displays the second category information; receives a selection instruction for information entries in the second category information, obtaining a second target-information-entry list; searches the first category information for the corresponding first target-information-entry list based on the second list; and then displays each information entry of the second target-information-entry list together with each information entry of the first target-information-entry list.
In one embodiment, the first difference information may be obtained by the information processing device analyzing and summarizing each information entry in the first target-information-entry list against each information entry in the second target-information-entry list.
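The entry-selection procedure described in these embodiments can be sketched as follows; the category-information dictionaries and entry names are invented for illustration.

```python
def summarize_difference(first_info, second_info, selected_entries):
    """Build the first target-information-entry list from the user's
    selected entries, look up the corresponding entries in the second
    category information, and keep only the pairs that differ."""
    first_list = {k: first_info.get(k) for k in selected_entries}
    second_list = {k: second_info.get(k) for k in selected_entries}
    return {k: (first_list[k], second_list[k])
            for k in selected_entries if first_list[k] != second_list[k]}

first_info = {"species": "tiger", "kingdom": "animal", "legs": 4}
second_info = {"species": "dog", "kingdom": "animal", "legs": 4}
diff = summarize_difference(first_info, second_info, ["species", "kingdom"])
```

Only the selected entries participate in the comparison, so entries the user did not select (here `legs`) never appear in the first difference information.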
The information processing method provided by the embodiment of the application obtains the first image, inputs the first image into the trained artificial neural network model to obtain the first class information, obtains the second image, inputs the second image into the trained artificial neural network model to obtain the second class information, and obtains the first difference information based on the first class information and the second class information. Therefore, the information processing method provided by the embodiment of the application can process the first image through the trained artificial neural network model to obtain the first category information, and can process the second image through the trained artificial neural network model to obtain the second category information, so that the accurate identification and classification functions of the trained artificial neural network model on the first image and the second image are fully utilized, and the first difference information between the first category information and the second category information is further obtained.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method, as shown in fig. 3, the information processing method includes the following steps:
step 201, a first image is acquired.
Step 202, acquiring feature dimension information.
In step 202, the feature dimension information may be used to represent a target feature information item in the category information corresponding to the first image.
In one embodiment, the feature dimension information may be used to represent at least two target feature information items in the category information corresponding to the first image.
In one embodiment, the feature dimension information may be at least one target feature information item obtained by the information processing device performing preliminary identification on the first image.
In one embodiment, the feature dimension information may be obtained by the information processing apparatus performing preliminary identification on the first image, presenting a plurality of target feature information items obtained, and then receiving a selection operation on the plurality of target feature information items.
In one embodiment, the feature dimension information may be a target feature information item that the information processing device receives user input.
Step 203, inputting the first image into the trained ANN model.
And 204, processing the first image by using the trained ANN model based on the characteristic dimension information to obtain first class information.
In step 204, the characteristic dimension information may be used as control information, and the control information is input into the trained ANN model, and the trained ANN model is controlled to process the first image, so as to obtain the first category information.
In an embodiment, the feature dimension information may be input into the trained ANN model as additional information of the first image, and the trained ANN model is used to process the first image to obtain the first category information.
In one embodiment, step 204 may be implemented as follows:
processing the first image by using the trained ANN model to obtain third category information; and determining the first category information based on the feature dimension information and the third category information.
Specifically, the first image may be directly input into the ANN model to obtain third category information, then target feature information items matched with the feature dimension information are selected from the third category information based on the feature dimension information, and the target feature information items are summarized to obtain the first category information.
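The selection of matching target feature information items described above can be sketched as follows; the dictionary representation of category information and the dimension names are assumptions for illustration only.

```python
def determine_first_category(third_category, feature_dimensions):
    # Keep only the target feature information items whose dimension
    # matches the requested feature dimension information.
    return {dim: item for dim, item in third_category.items()
            if dim in feature_dimensions}

# Third category information produced by the model (hypothetical items).
third = {"beak": "pointed yellow beak",
         "neck": "long white neck",
         "back": "black back"}
# First category information restricted to the requested dimensions.
first = determine_first_category(third, {"beak", "neck"})
```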
Step 205, acquiring a second image based on the historical operation information of the user, and/or acquiring the second image based on the first category information.
In step 205, the historical operation information of the user may be the historical operation information of the user in the current application program.
In one embodiment, the historical operation information of the user may be historical operation information of the user in any application program in the electronic device, for example, historical operation information of the user in a file manager.
In one embodiment, the historical operation information of the user may be historical operation information of any application program other than the current application program in the electronic device, for example, online browsing operations performed by the user in a browser.
In one embodiment, the historical operation information of the user may be historical operation information executed on the electronic device by the user within a certain preset time period.
In one embodiment, the historical operation information of the user may be a specific type of historical operation information executed on the electronic device by the user within a certain preset time period.
In one embodiment, the historical operation information of the user may be historical operation information of picture browsing performed on the electronic device by the user within a certain preset time period.
In step 205, the second image acquired based on the historical operation information of the user may be any one of the images acquired based on the historical operation information of the user and unrelated to the first image.
In one embodiment, the second image obtained based on the historical operation information of the user may be any one of the images obtained based on the historical operation information of the user in relation to the first image.
In step 205, the second image may be any image that is not related to the first image.
In one embodiment, the second image acquired may be any image that is related to the first image.
In one embodiment, the acquired second image may be an image stored in the current application.
Illustratively, the acquiring of the second image in step 205 based on the first category information may be implemented by:
Step A1, acquiring an image association parameter.
In step A1, the image association parameter is used to indicate the degree of association with the first category information. The larger the image association parameter, the stronger the association between the second image to be acquired and the first image, that is, the closer the categories to which the second image and the first image belong; conversely, the smaller the parameter, the farther apart the categories to which the two images belong.
In one embodiment, the image association parameter may be preset in the current application.
In one embodiment, the image association parameter may be set by the user based on interests.
In one embodiment, the image association parameter may be set by the user based on the need for image recognition.
In one embodiment, the value of the image-related parameter is adjustable.
Step A2, determining target feature information based on the image association parameter and the first category information.
In one embodiment, the target feature information in step A2 may be the category information corresponding to each feature information item in the first category information, obtained based on the image association parameter and the first category information.
In one embodiment, the target feature information in step A2 may be determined by determining fourth category information based on the image association parameter and the first category information, selecting feature information items from the fourth category information, and aggregating the selected items.
In one embodiment, step A2 may be implemented by determining fourth category information based on the image association parameter and the first category information, and receiving the target feature information determined by a user selection of a category information entry in the fourth category information.
Step A3, determining a second image based on the target feature information.
In one embodiment, step A3 may be implemented by retrieving images in the database of the current application according to the target feature information to obtain the second image.
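Steps A1-A3 can be sketched as follows. The overlap-based scoring and the example database are assumptions, since the embodiment does not fix a particular retrieval method; the association parameter is taken as a value between 0 and 1.

```python
def select_second_images(database, first_category, association):
    # Score each candidate by its overlap with the first category
    # information; keep candidates whose overlap ratio reaches the
    # image association parameter (0 = unrelated, 1 = same category).
    selected = []
    for name, features in database.items():
        overlap = len(features & first_category) / max(len(first_category), 1)
        if overlap >= association:
            selected.append(name)
    return selected

db = {"heron.jpg": {"waterfowl", "long neck"},
      "sparrow.jpg": {"songbird"}}
candidates = select_second_images(db, {"waterfowl", "long neck"}, 0.5)
```

A larger association parameter narrows the candidates to images close to the first image's category, matching the behavior described for step A1.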
Step 206, inputting the second image into the trained ANN model to obtain second category information.
Step 207, obtaining first difference information based on the first category information and the second category information.
The information processing method provided by the embodiment of the application acquires a first image and feature dimension information, inputs the first image into the trained ANN model, and processes the first image by using the trained ANN model based on the feature dimension information to obtain first category information. A second image is then acquired based on the historical operation information of the user and/or based on the first category information, the second image is input into the trained ANN model to obtain second category information, and first difference information is obtained based on the first category information and the second category information. In this way, the trained ANN model processes the first image according to the feature dimension information, so that the first category information can be flexibly adjusted according to the feature dimension information; the second image is acquired according to the historical operation record of the user and/or the first category information and input into the trained ANN model to obtain the second category information; and finally the first difference information representing the difference between the first category information and the second category information is obtained.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method, as shown in fig. 4, the information processing method including the steps of:
step 301, a first image is acquired.
Step 302, inputting the first image into the trained ANN model to obtain first category information.
The first category information is used for representing information of a category to which the first image belongs.
Illustratively, training of the ANN model needs to be completed before steps 301 and 302.
In the embodiment of the present application, the training of the ANN model can be realized through steps B1-B2:
and B1, acquiring image sample data.
In step B1, the acquired image sample data may be an image including various types of information.
In one embodiment, the acquired image sample data may be an image from which certain specific types of information have been removed.
Step B2, adjusting parameters of the ANN model based on the image sample data until the parameters of the ANN model meet the training end condition, so as to obtain the trained artificial neural network model.
In step B2, the training end condition may be a preset condition.
In one embodiment, the training end condition may be a preset error threshold between the training result and the expected result.
In one embodiment, the training end condition may be a preset error threshold of the classification error of the ANN model.
Specifically, the training shown in step B2 is a supervised ANN model training method, which may also be referred to as error-correction learning. In this method, a training objective function, i.e., the training end condition, is set, and the network connection weights are then corrected according to the error between the actual output and the expected output of the ANN model until the output error of the ANN model is smaller than the training objective function. The output of the ANN model then meets the expected effect, and the trained ANN model is obtained.
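The error-correction training described above can be sketched on a toy one-input linear unit; the learning rate, the linear model, the synthetic samples and the mean-squared-error end condition are illustrative assumptions, not the patent's concrete network.

```python
def train_supervised(samples, lr=0.1, error_threshold=0.05, max_epochs=1000):
    # Error-correction learning: adjust the connection weights by the
    # error between actual and expected output until the mean squared
    # error falls below the training end condition (error_threshold).
    w, b = 0.0, 0.0
    for _ in range(max_epochs):
        total = 0.0
        for x, expected in samples:
            actual = w * x + b
            err = expected - actual
            w += lr * err * x    # weight correction
            b += lr * err        # bias correction
            total += err * err
        if total / len(samples) < error_threshold:
            break                # training end condition met
    return w, b

# Samples whose expected output follows y = 2x + 1.
samples = [(x, 2.0 * x + 1.0) for x in (-1.0, 0.0, 1.0, 2.0)]
w, b = train_supervised(samples)
```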
In the embodiment of the present application, the training of the ANN model may be further implemented by steps C1-C4:
and step C1, determining a parameter training rule.
In step C1, the parameter training rule is used to train the ANN model.
In one embodiment, the parameter training rule may include expected results of the ANN model processing various types of data.
In one embodiment, the parameter training rule may include error thresholds for the ANN model processing various types of data.
In one embodiment, the parameter training rule may include a convergence condition on the processing errors of the ANN model for various types of data.
Step C2, determining the training end condition based on the parameter training rule.
In step C2, the training end condition may be an error threshold between the training result and the expected result of the ANN model.
In one embodiment, the training end condition may be an error threshold for ANN model classification errors.
Step C3, acquiring a plurality of third images.
In step C3, the plurality of third images may be a plurality of images input by the user in the current application.
In one embodiment, the plurality of third images may be a plurality of images stored in the current application.
In one embodiment, the plurality of third images may be a plurality of images of different types.
In one embodiment, the plurality of third images may be a plurality of images of the same type.
In one embodiment, the plurality of third images may be a plurality of images operated by the user in other applications of the electronic device.
Step C4, adjusting parameters of the ANN model based on the parameter training rule and the plurality of third images until the parameters of the ANN model meet the training end condition, so as to obtain the trained ANN model.
Specifically, the training method in step C4 is an unsupervised learning process for the ANN model. It is realized mainly through self-organizing learning on the provided samples; the learning process may have no expected output, and the neurons of the ANN model compete with each other to respond to external stimulation patterns, so that the network weights of the ANN model are adjusted to adapt to the input sample data.
In practical application, for unsupervised learning training of the ANN model, the ANN model may be directly set in an application environment, and a training phase and an application phase are combined into one.
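The self-organizing, competitive learning described above can be sketched with a winner-take-all update on scalar samples; the initial unit positions, learning rate and sample clusters are illustrative assumptions.

```python
def train_competitive(samples, initial_weights, lr=0.5, epochs=20):
    # Winner-take-all competitive learning: for each input, the unit
    # whose weight is closest "wins" and only its weight is moved
    # toward the sample; no expected output is used.
    weights = list(initial_weights)
    for _ in range(epochs):
        for x in samples:
            winner = min(range(len(weights)),
                         key=lambda i: abs(weights[i] - x))
            weights[winner] += lr * (x - weights[winner])
    return weights

# Two natural clusters around 1.0 and 9.0; two competing units.
samples = [0.9, 1.1, 1.0, 8.9, 9.1, 9.0]
weights = train_competitive(samples, initial_weights=[0.0, 10.0])
```

After training, each unit's weight has drifted toward one cluster, illustrating how the network adapts to the input sample data without supervision.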
Step 303, acquiring a second image.
Step 304, inputting the second image into the trained ANN model to obtain second category information.
Step 305, obtaining first difference information based on the first category information and the second category information.
The first difference information is used for representing the difference between the first category information and the second category information.
Step 306, performing an editing operation on the first image and/or the second image to obtain a third image.
In step 306, an editing operation is performed on the first image to obtain a third image, which may be obtained by removing the screen information of the first area of the first image.
In one embodiment, the editing operation is performed on the first image to obtain the third image, and the selecting operation may be performed on the first region of the first image, and the selected image may be the third image.
In one embodiment, the editing operation is performed on the first image to obtain a third image, the fourth image may be selected, and the first region of the first image may be replaced with the fourth image to obtain the third image.
In step 306, the second image is edited to obtain a third image, which may be the third image obtained by removing the screen information of the second area of the second image.
In one embodiment, the editing operation is performed on the second image to obtain the third image, and the selecting operation may be performed on the second area of the second image, and the selected image is the third image.
In one embodiment, the editing operation is performed on the second image to obtain a third image, the fourth image may be selected, and the second region of the second image may be replaced with the fourth image to obtain the third image.
In step 306, the editing operation performed on the first image and the second image to obtain the third image may be as follows: the first region of the first image is selected to obtain a fifth image, and the second region of the second image is selected to obtain a sixth image; the first region of the first image is then replaced with the sixth image to obtain a seventh image, and the second region of the second image is replaced with the fifth image to obtain an eighth image. The third image may be the seventh image or the eighth image.
In one embodiment, the editing operation performed on the first image and the second image to obtain the third image may be as follows: the first region of the first image is selected to obtain a fifth image; the region of the second image whose feature information corresponds to the target feature information of the first region is selected to obtain a sixth image; the first region of the first image is replaced with the sixth image to obtain a seventh image; and the second region of the second image is replaced with the fifth image to obtain an eighth image. The third image may be the seventh image or the eighth image.
Specifically, as shown in fig. 5, the head feature information area in the left image in fig. 5, i.e., the first image, may be selected, the head feature information area in the middle image, i.e., the second image, may also be selected, and then the head in the first image may be replaced with the head in the second image, so as to obtain the third image in the right portion in fig. 5. The third image shown in the right part of fig. 5 includes feature information of the bird's body part in the left image of fig. 5, i.e., the first image, and also includes head feature information of the second image described in the middle part of fig. 5.
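The region-replacement editing of fig. 5 can be sketched on images represented as nested lists; a real implementation would operate on pixel arrays, and the region coordinates here are hypothetical.

```python
def swap_region(first_image, second_image, top, bottom, left, right):
    # Replace the selected region of the first image with the same
    # region taken from the second image; images are nested lists of
    # pixel values, and the originals are left unmodified.
    third = [row[:] for row in first_image]
    for r in range(top, bottom):
        for c in range(left, right):
            third[r][c] = second_image[r][c]
    return third

first = [[1, 1], [1, 1]]
second = [[2, 2], [2, 2]]
# Replace the top row of the first image (the "head" region of fig. 5).
third = swap_region(first, second, top=0, bottom=1, left=0, right=2)
```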
Step 307, inputting the third image into the trained ANN model to obtain fourth category information.
Specifically, step 307 may be implemented by the operation of step 302 or step 304.
Illustratively, in step 307, feature information of the edited portion in the third image may also be used as a part of the fourth category information.
In one embodiment, the fourth category information may be feature information of an edited portion of the third image.
In one embodiment, the fourth category information may be feature information of a portion of the third image other than the edited portion.
In one embodiment, before step 307, a child may first be asked to recognize the third image, and then step 307 may be executed to process the third image and obtain the fourth category information, thereby increasing the interest of the whole information processing method.
The information processing method provided by the embodiment of the application acquires a first image, inputs the first image into the trained ANN model to obtain first category information, acquires a second image, inputs the second image into the trained ANN model to obtain second category information, obtains first difference information based on the first category information and the second category information, performs an editing operation on the first image and/or the second image to obtain a third image, and inputs the third image into the trained ANN model to obtain fourth category information. The method can therefore classify the first image and the second image and also recognize the category information of the edited third image, so that the information processing method provided by the embodiment of the application can realize category classification and category difference analysis of any image.
Based on the foregoing embodiments, the embodiments of the present application provide a specific implementation flow of an information processing method, as shown in fig. 6. When the information processing method starts, the input source of the image is detected to judge whether the image is input by the user. If it is, the currently input first image is judged to be a user image; if it is not, the currently input first image is judged to be an existing image in the database. The currently input first image is then input into the ANN model for processing to obtain the first category information of the first image, the first category information is interpreted to obtain a category information interpretation text, and the interpretation text is output.
The category information interpretation text may be description information of the first image, a definition of the first category, and interpretation information giving the reason for assigning the first image to the first category. This is shown in detail in fig. 7, which explains how the category definition information is obtained from the input image, and how the interpretation information is further obtained from the input image and the category definition information.
The left part of fig. 7 shows a two-dimensional coordinate system of category information and image information. Through this coordinate system, image description information can be obtained for any image (e.g., the first image or the second image) input to the information processing apparatus, along with the category definition information (e.g., the first category information or the second category information) obtained after that image is input into the ANN model; interpretation information can then be obtained on the basis of the image description information and the category definition information.
Illustratively, as shown in the right part of fig. 7, the first input picture, i.e., the first image, in the right part of fig. 7 is a North American grebe, and its corresponding image description information is: this is a large waterfowl with a white neck and a black back. The category definition information obtained by inputting the image into the ANN model is: a grebe is a waterfowl with a yellow pointed beak, a white neck and abdomen, and a black back. The interpretation information obtained from the image description information and the category definition information is: this is a North American grebe, because this bird has a long neck, a yellow pointed beak and red eyes.
The second input picture, i.e., the first image, in the right part of fig. 7 is an albatross, and its corresponding image description information is: this is a large bird with a white belly and black wings. The category definition information obtained by inputting the image into the ANN model is: the albatross is a seabird with a hooked yellow beak, a white abdomen and a black back. The interpretation information obtained from the image description information and the category definition information is: this is an albatross, because this bird has large wings, a hooked yellow beak and a white belly.
The third input picture, i.e., the first image, in the right part of fig. 7 is also an albatross, and its corresponding image description information is: this is a large bird with a white belly and a black back. The category definition information obtained by inputting the image into the ANN model is: the albatross is a seabird with a hooked yellow beak, a white abdomen and a black back. The interpretation information obtained from the image description information and the category definition information is: this is an albatross, because this bird has a hooked yellow beak, a white belly and a black back.
It should be noted that although the similarity between the second picture and the third picture in the right part of fig. 7 is not high, the ANN model can still accurately identify the target object in the images, i.e., the albatross. In addition, the image description information in the embodiment of the present application may be obtained by performing image recognition on the input image.
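Composing interpretation text from the image description information and the category definition information, as in fig. 7, can be sketched as follows; representing the description as a set of feature phrases and the sentence template are assumptions for illustration only.

```python
def interpret(category_name, description_features, category_definition):
    # Compose interpretation text from the feature items that appear in
    # both the image description and the category definition, i.e., the
    # evidence for assigning the image to the category.
    shared = [f for f in category_definition if f in description_features]
    return ("This is " + category_name + " because this bird has "
            + ", ".join(shared) + ".")

description = {"hooked yellow beak", "white belly",
               "black back", "large wings"}
definition = ["hooked yellow beak", "white belly", "black back"]
text = interpret("an albatross", description, definition)
```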
Optionally, in fig. 6, a second image may be input into the information processing apparatus and processed by the ANN model to obtain the second category information; on the basis of the first category information and the second category information, the difference information between them may also be obtained.
In fig. 6, any one of the input images may be an image taken by the user, an image browsed or saved by the user in another application program, an image inherent to the current application program, or an image obtained after editing the original image as described in the foregoing embodiment.
Alternatively, in fig. 6, the interpretation information and/or the difference information may be output in a voice manner.
In the specific implementation flow of the information processing method provided by the embodiment of the application, any image can be processed to obtain its image description information, the image can be input into the ANN model to obtain the corresponding category information, and the interpretation information and/or the difference information relative to the category information of other images can then be obtained based on the category information and the image description information.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing apparatus 4, as shown in fig. 8, the information processing apparatus 4 including: a processor 41, a memory 42, and a communication bus 43;
wherein, the communication bus 43 is used for realizing the communication connection between the processor 41 and the memory 42;
the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
acquiring a first image;
inputting the first image into the trained artificial neural network model to obtain first class information; the first category information is used for representing information of a category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second class information;
obtaining first difference information based on the first category information and the second category information; the first difference information is used for representing the difference between the first category information and the second category information.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
acquiring image sample data;
and adjusting parameters of the artificial neural network model based on the image sample data until the parameters of the artificial neural network model meet training ending conditions to obtain the trained artificial neural network model.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
determining a parameter training rule;
determining the training end condition based on the parameter training rule;
acquiring a plurality of third images;
and adjusting parameters of the artificial neural network model based on the parameter training rule and the plurality of third images until the parameters of the artificial neural network model meet the training end condition, so as to obtain the trained artificial neural network model.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
inputting the first image into the trained artificial neural network model to obtain first class information, wherein the first class information comprises:
acquiring characteristic dimension information;
inputting the first image into the trained artificial neural network model;
and processing the first image by using the trained artificial neural network model based on the characteristic dimension information to obtain first class information.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
based on the feature dimension information, processing the first image by using the trained artificial neural network model to obtain first class information, which comprises the following steps:
processing the first image by using the trained artificial neural network model to obtain third category information;
and determining the first category information based on the feature dimension information and the third category information.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
acquiring a second image comprising:
based on the user's historical operating information, a second image is acquired, and/or,
and acquiring a second image based on the first category information.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
acquiring a second image based on the first category information, including:
acquiring an image association parameter;
determining target feature information based on the image association parameter and the first category information;
determining a second image based on the target feature information.
In other embodiments of the present application, the processor 41 is configured to execute a program of an information processing method in the memory 42 to implement the following steps:
performing editing operation on the first image and/or the second image to obtain a third image;
and inputting the third image into the trained artificial neural network model to obtain fourth class information.
The information processing apparatus provided in the embodiment of the application acquires a first image, inputs the first image into the trained artificial neural network model to obtain first category information, acquires a second image, inputs the second image into the trained artificial neural network model to obtain second category information, and obtains first difference information based on the first category information and the second category information. The apparatus thus processes both images through the trained artificial neural network model, fully utilizing the model's accurate recognition and classification of the first image and the second image, and further obtains the first difference information between the first category information and the second category information.
Based on the foregoing embodiments, the present application further provides a computer-readable storage medium, where one or more programs are stored, and the one or more programs are executable by one or more processors to implement the steps of any information processing method in the foregoing embodiments.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
The methods disclosed in the method embodiments provided by the present application can be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in various product embodiments provided by the application can be combined arbitrarily to obtain new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided herein may be combined in any combination to arrive at new method or apparatus embodiments without conflict.
The computer-readable storage medium may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); and may be provided in various electronic devices, such as mobile phones, computers, tablet devices, and personal digital assistants, including one or any combination of the above-mentioned memories.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An information processing method, the method comprising:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first category information; the first category information is used for representing information of a category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information;
obtaining first difference information based on the first category information and the second category information; wherein the first difference information is used for representing the difference between the first category information and the second category information.
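The flow of claim 1 can be sketched as follows. This is a minimal illustration only: the claim fixes neither the model architecture nor the difference metric, so a toy softmax classifier and an L1 distance are assumed, and all names are hypothetical.

```python
import math
from typing import List

def classify(image: List[float], weights: List[List[float]]) -> List[float]:
    # Stand-in for the trained artificial neural network model: one linear
    # layer followed by softmax, producing category information as a
    # probability per candidate category.
    logits = [sum(w * x for w, x in zip(row, image)) for row in weights]
    peak = max(logits)
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def category_difference(p: List[float], q: List[float]) -> float:
    # First difference information: here, the L1 distance between the two
    # category distributions (the claim does not specify a metric).
    return sum(abs(a - b) for a, b in zip(p, q))

weights = [[0.9, 0.1], [0.1, 0.9]]           # toy 2-feature, 2-category model
first_info = classify([1.0, 0.0], weights)   # first image -> first category information
second_info = classify([0.0, 1.0], weights)  # second image -> second category information
diff = category_difference(first_info, second_info)
```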
2. The method of claim 1, further comprising:
acquiring image sample data;
adjusting parameters of the artificial neural network model based on the image sample data until the parameters of the artificial neural network model meet a training end condition, to obtain the trained artificial neural network model.
3. The method of claim 1, further comprising:
determining a parameter training rule;
determining a training end condition based on the training rule;
acquiring a plurality of third images;
adjusting parameters of the artificial neural network model based on the parameter training rule and the third images until the parameters of the artificial neural network model meet the training end condition, to obtain the trained artificial neural network model.
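Claims 2 and 3 describe adjusting model parameters until a training end condition is met. A minimal sketch, assuming gradient descent on a one-parameter model and a small-gradient stopping rule (the claims leave the concrete training rule and model open):

```python
from typing import List, Tuple

def train(samples: List[Tuple[float, float]],
          lr: float = 0.1, tol: float = 1e-3, max_epochs: int = 1000) -> float:
    # samples: (feature, label) pairs for a one-parameter model y = w * x.
    # The "training end condition" here is a small gradient magnitude;
    # the patent does not fix a concrete rule.
    w = 0.0
    for _ in range(max_epochs):
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        if abs(grad) < tol:       # training end condition met
            break
        w -= lr * grad            # adjust the model parameter
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x
w = train(samples)                               # converges toward w = 2
```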
4. The method of claim 1, wherein inputting the first image into a trained artificial neural network model to obtain the first category information comprises:
acquiring characteristic dimension information;
inputting the first image into the trained artificial neural network model;
and processing the first image by using the trained artificial neural network model based on the characteristic dimension information to obtain the first category information.
5. The method of claim 4, wherein processing the first image using the trained artificial neural network model based on the feature dimension information to obtain the first category information comprises:
processing the first image by using the trained artificial neural network model to obtain third category information;
determining the first category information based on the feature dimension information and the third category information.
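Claim 5 describes deriving the first category information by restricting intermediate (third) category information using the feature dimension information. A minimal sketch under assumed representations — scores as a dict, feature dimensions as an allowed-category set; the claim does not specify these:

```python
from typing import Dict, Set

def determine_first_category(third_info: Dict[str, float],
                             feature_dims: Set[str]) -> str:
    # third_info: third category information, a score per candidate category
    # produced by the model. feature_dims: feature dimension information,
    # here taken as the set of categories to consider. The best remaining
    # category becomes the first category information.
    candidates = {c: s for c, s in third_info.items() if c in feature_dims}
    return max(candidates, key=candidates.get)

third_info = {"cat": 0.5, "dog": 0.3, "car": 0.2}
first_info = determine_first_category(third_info, {"dog", "car"})  # "dog"
```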
6. The method of claim 1, wherein said acquiring a second image comprises:
acquiring the second image based on historical operating information of a user; and/or
acquiring the second image based on the first category information.
7. The method of claim 6, wherein the obtaining the second image based on the first category information comprises:
acquiring image correlation parameters;
determining target characteristic information based on the image correlation parameters and the first category information;
determining the second image based on the target feature information.
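Claim 7 can be sketched as a lookup: the image correlation parameters map the first image's category to target feature information, and the second image is the library image matching that feature. All names and data shapes here are hypothetical:

```python
from typing import Dict

def acquire_second_image(first_category: str,
                         correlation: Dict[str, str],
                         library: Dict[str, str]) -> str:
    # correlation: image correlation parameters, mapping a category to
    # target feature information. library: candidate second images and
    # their features.
    target_feature = correlation[first_category]
    for image_id, feature in library.items():
        if feature == target_feature:
            return image_id
    raise LookupError("no image matches the target feature")

correlation = {"dog": "animal"}                  # category -> related feature
library = {"img1": "vehicle", "img2": "animal"}  # candidate second images
second = acquire_second_image("dog", correlation, library)  # "img2"
```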
8. The method of claim 1, further comprising:
performing an editing operation on the first image and/or the second image to obtain a third image;
inputting the third image into the trained artificial neural network model to obtain fourth category information.
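Claim 8 — editing an image and feeding the result back through the same model — can be sketched as below. The editing operation and classifier are stand-ins (simple intensity scaling and an argmax), since the claim does not specify either:

```python
from typing import List

def edit_image(image: List[float], scale: float = 0.5) -> List[float]:
    # An editing operation (here, intensity scaling) applied to the first
    # and/or second image, producing the third image.
    return [p * scale for p in image]

def classify_stub(image: List[float]) -> int:
    # Stand-in for the trained model: returns the index of the strongest
    # feature as the category.
    return max(range(len(image)), key=lambda i: image[i])

third_image = edit_image([0.2, 0.8])
fourth_category = classify_stub(third_image)  # fourth category information
```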
9. An information processing apparatus characterized by comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute the program of the information processing method stored in the memory to implement the following steps:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first class information; the first category information is used for representing information of a category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information;
obtaining first difference information based on the first category information and the second category information; wherein the first difference information is used for representing the difference between the first category information and the second category information.
10. A computer-readable storage medium characterized by storing one or more programs, which are executable by one or more processors, to implement the steps of the information processing method according to any one of claims 1 to 8.
CN201911378909.XA 2019-12-27 2019-12-27 Information processing method and device and computer readable storage medium Pending CN111160453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911378909.XA CN111160453A (en) 2019-12-27 2019-12-27 Information processing method and device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN111160453A true CN111160453A (en) 2020-05-15

Family

ID=70558651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911378909.XA Pending CN111160453A (en) 2019-12-27 2019-12-27 Information processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111160453A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140214739A1 (en) * 2012-11-06 2014-07-31 International Business Machines Corporation Cortical simulator
CN105512624A (en) * 2015-12-01 2016-04-20 天津中科智能识别产业技术研究院有限公司 Smile face recognition method and device for human face image
CN108564066A (en) * 2018-04-28 2018-09-21 国信优易数据有限公司 A kind of person recognition model training method and character recognition method
CN108764370A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN108875821A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing
CN110188613A (en) * 2019-04-28 2019-08-30 上海鹰瞳医疗科技有限公司 Image classification method and equipment
CN110288049A (en) * 2019-07-02 2019-09-27 北京字节跳动网络技术有限公司 Method and apparatus for generating image recognition model
CN110309715A (en) * 2019-05-22 2019-10-08 北京邮电大学 Indoor orientation method, the apparatus and system of lamps and lanterns identification based on deep learning


Similar Documents

Publication Publication Date Title
Tuli et al. Are convolutional neural networks or transformers more like human vision?
Karaboga et al. A novel clustering approach: Artificial Bee Colony (ABC) algorithm
CN111597374B (en) Image classification method and device and electronic equipment
CN109858505B (en) Classification identification method, device and equipment
Salunke et al. A new approach for automatic face emotion recognition and classification based on deep networks
CN107223260B (en) Method for dynamically updating classifier complexity
CN110222577A (en) A kind of target monitoring method, apparatus, computer equipment and storage medium
US11983917B2 (en) Boosting AI identification learning
CN109522970B (en) Image classification method, device and system
CN111340112B (en) Classification method, classification device and classification server
WO2021135546A1 (en) Deep neural network interpretation method and device, terminal, and storage medium
CN112748941A (en) Feedback information-based target application program updating method and device
Terziyan et al. Causality-aware convolutional neural networks for advanced image classification and generation
CN113704534A (en) Image processing method and device and computer equipment
CN113496251A (en) Device for determining a classifier for identifying an object in an image, device for identifying an object in an image and corresponding method
JP2019023801A (en) Image recognition device, image recognition method and image recognition program
CN111160453A (en) Information processing method and device and computer readable storage medium
Rekabdar et al. Scale and translation invariant learning of spatio-temporal patterns using longest common subsequences and spiking neural networks
KR102615445B1 (en) Method, apparatus and system for providing nutritional information based on fecal image analysis
CN115346084A (en) Sample processing method, sample processing apparatus, electronic device, storage medium, and program product
Natesan et al. Birds Egg Recognition using Artificial Neural Network
KR102636461B1 (en) Automated labeling method, device, and system for learning artificial intelligence models
US20230385605A1 (en) Complementary Networks for Rare Event Detection
Kapoor et al. Bell-Pepper Leaf Bacterial Spot Detection Using AlexNet and VGG-16
US20220092348A1 (en) Concept for Generating Training Data and Training a Machine-Learning Model for Use in Re-Identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination