CN111796663A - Scene recognition model updating method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111796663A
Authority
CN
China
Prior art keywords
scene
data
recognition model
perception
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910282003.1A
Other languages
Chinese (zh)
Other versions
CN111796663B (en)
Inventor
何明
陈仲铭
黄粟
刘耀勇
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282003.1A priority Critical patent/CN111796663B/en
Publication of CN111796663A publication Critical patent/CN111796663A/en
Application granted granted Critical
Publication of CN111796663B publication Critical patent/CN111796663B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The embodiments of the application provide a scene recognition model updating method and apparatus, a storage medium, and an electronic device. The scene recognition model updating method includes: acquiring perception data of a current scene; determining target scene data from a plurality of scene data according to the perception data, where the scene corresponding to the target scene data is similar to the current scene; training on the perception data and the target scene data to obtain a sub-model; and adding the sub-model to the scene recognition model to update the scene recognition model. With this method, the electronic device can update the scene recognition model through adaptive learning, so that the updated model can recognize the previously unknown current scene. Because the scene recognition model can then recognize more scenes, the accuracy with which the electronic device recognizes scenes is improved.

Description

Scene recognition model updating method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a method and an apparatus for updating a scene recognition model, a storage medium, and an electronic device.
Background
With the development of electronic technology, electronic devices such as smart phones are capable of providing more and more services to users. For example, the electronic device may provide social services, navigation services, travel recommendation services, and the like for the user. In order to provide targeted and personalized services for users, the electronic device needs to identify the scene where the user is located.
Disclosure of Invention
The embodiment of the application provides a scene recognition model updating method and device, a storage medium and electronic equipment, and the accuracy of scene recognition of the electronic equipment can be improved.
The embodiment of the application provides a scene recognition model updating method, wherein the scene recognition model comprises scene data of a plurality of known scenes, and the scene recognition model updating method comprises the following steps:
acquiring perception data of a current scene;
determining target scene data from the plurality of scene data according to the perception data, wherein a scene corresponding to the target scene data is a similar scene with the current scene;
training the perception data and the target scene data to obtain a submodel, wherein the submodel is used for identifying the current scene;
adding the sub-model to the scene recognition model to update the scene recognition model.
An embodiment of the present application further provides a scene recognition model updating apparatus, where the scene recognition model includes scene data of a plurality of known scenes, and the scene recognition model updating apparatus includes:
the acquisition module is used for acquiring the perception data of the current scene;
a determining module, configured to determine target scene data from the plurality of scene data according to the sensing data, where a scene corresponding to the target scene data is a similar scene to the current scene;
the training module is used for training the perception data and the target scene data to obtain a submodel, wherein the submodel is used for identifying the current scene;
and the updating module is used for adding the sub-model into the scene recognition model so as to update the scene recognition model.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer is enabled to execute the above scene recognition model updating method.
The embodiment of the application further provides an electronic device, which comprises a processor and a memory, wherein a computer program is stored in the memory, and the processor is used for executing the scene recognition model updating method by calling the computer program stored in the memory.
The scene recognition model updating method provided by the embodiments of the application includes: acquiring perception data of a current scene; determining target scene data from a plurality of scene data according to the perception data, where the scene corresponding to the target scene data is similar to the current scene; training on the perception data and the target scene data to obtain a sub-model used to identify the current scene; and adding the sub-model to the scene recognition model to update the scene recognition model. With this method, the electronic device can update the scene recognition model through adaptive learning, so that the updated model can recognize the previously unknown current scene. Because the model can then recognize more scenes, the accuracy of scene recognition on the electronic device is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a scene identification model updating method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a first method for updating a scene recognition model according to an embodiment of the present disclosure.
Fig. 3 is a second flowchart of a scene recognition model updating method according to an embodiment of the present application.
Fig. 4 is a third flowchart illustrating a scene recognition model updating method according to an embodiment of the present application.
Fig. 5 is a fourth flowchart illustrating a scene recognition model updating method according to an embodiment of the present application.
Fig. 6 is a fifth flowchart illustrating a scene recognition model updating method according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a first scene recognition model updating apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a second scene recognition model updating apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a scene recognition model updating method according to an embodiment of the present application. The scene recognition model updating method is applied to an electronic device. A panoramic perception architecture is provided in the electronic device. The panoramic perception architecture is the integration of hardware and software used in the electronic device to implement the scene recognition model updating method.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information of the electronic equipment or information in an external environment. The information-perceiving layer may include a plurality of sensors. For example, the information sensing layer includes a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, a distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor can be used for detecting light information of the environment where the electronic equipment is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect, and can be used for realizing automatic control of electronic equipment. The location sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocity of an electronic device in various directions. Inertial sensors may be used to detect motion data of an electronic device. The gesture sensor may be used to sense gesture information of the electronic device. A barometer may be used to detect the barometric pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
Data cleaning refers to cleaning the large amount of data acquired by the information perception layer to remove invalid data and duplicate data. Data integration refers to integrating multiple single-dimensional data items acquired by the information perception layer into a higher or more abstract dimension, so that the single-dimensional data can be processed jointly. Data transformation refers to converting the type or format of the acquired data so that the transformed data meets the processing requirements. Data reduction refers to minimizing the data volume while preserving the original character of the data as far as possible.
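As a concrete illustration of the cleaning step described above, the following is a minimal sketch that removes invalid and repeated readings; the field names are assumptions for illustration, not taken from the patent:

```python
def clean(readings):
    """Drop readings with a missing value and exact duplicates (data cleaning sketch)."""
    seen, out = set(), []
    for r in readings:
        key = tuple(sorted(r.items()))  # hashable fingerprint of the reading
        if r.get("value") is None or key in seen:
            continue  # skip invalid data and repeated data
        seen.add(key)
        out.append(r)
    return out
```

A cleaned list preserves the first occurrence of each distinct valid reading, mirroring the "remove invalid data and repeated data" requirement.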
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features, or further process the extracted features, using approaches such as filter methods, wrapper methods, or ensemble methods.
Filter methods screen the extracted features to remove redundant feature data. Wrapper methods evaluate subsets of the extracted features to select the useful ones. Ensemble methods combine multiple feature extraction methods to construct a more efficient and accurate feature extractor.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture may further include a plurality of algorithms, each of which can be used to analyze and process data; together, these algorithms form an algorithm library. For example, the algorithm library may include Markov models, latent Dirichlet allocation (LDA), Bayesian classification, Support Vector Machines (SVM), K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory (LSTM) networks, convolutional neural networks, and recurrent neural networks.
The embodiment of the application provides a scene recognition model updating method, which can be applied to electronic equipment. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (augmented reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a laptop computer, a desktop computing device, a wearable device such as an electronic watch, an electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, an electronic garment, or the like.
Referring to fig. 2, fig. 2 is a schematic flowchart of a first method for updating a scene recognition model according to an embodiment of the present application. The scene recognition model updating method comprises the following steps:
and 110, acquiring the perception data of the current scene.
The electronic device may obtain perception data of the current scene. The current scene is the scene of the environment in which the electronic device, and hence its user, is currently located. The perception data may include various types of data. For example, the perception data may include environmental data such as ambient temperature, ambient light intensity, and ambient noise. The perception data may also include image data, memory data, text data displayed on the electronic device, and other operating data of the electronic device. The perception data may further include user behavior habit data, such as the user's daily schedule and the entertainment activities the user engages in.
The electronic device can acquire the perception data of the current scene through the information perception layer in the panoramic perception architecture. For example, the electronic device may detect an ambient temperature by a temperature sensor, detect an ambient light intensity by a light sensor, detect an ambient noise by a microphone, call image data, memory data, and the like on the electronic device by a processing circuit of the electronic device, and acquire text data displayed on the electronic device by a display control circuit. In addition, the electronic equipment can also sense and record the behavior of the user through the information sensing layer to form behavior habit data of the user.
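For illustration, the perception data gathered by the information perception layer might be assembled into a fixed-order feature vector before further processing. The sensor names and ordering below are assumptions, not part of the patent:

```python
# Illustrative feature order for the perception data vector (hypothetical names).
FEATURES = ["ambient_temperature", "light_intensity", "noise_level", "acceleration"]

def to_feature_vector(readings):
    """Map a dict of sensor readings to a fixed-order list, using 0.0 when a sensor is absent."""
    return [float(readings.get(name, 0.0)) for name in FEATURES]

vec = to_feature_vector({"light_intensity": 320.0, "noise_level": 55.2})
```

A fixed ordering like this lets the later clustering and similarity steps compare the perception data against stored scene data component by component.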
And 120, determining target scene data from the plurality of scene data according to the perception data, wherein the scene corresponding to the target scene data is a similar scene with the current scene.
The electronic equipment is provided with a scene recognition model. The scene recognition model may include scene data for a plurality of known scenes. Wherein each of the scene data corresponds to a known scene. That is, each of the scene data is used to represent a known scene. For example, when the scene recognition model trains the scene data of the plurality of known scenes, the scene recognition model may store the scene data of the plurality of known scenes. By a known scene is meant that the scene recognition model can recognize the scene, i.e. the scene is known to the scene recognition model.
For example, the scene recognition model may include 5 scene data such as x1, x2, x3, x4, and x 5. Wherein the scene data x1 corresponds to scene y1, the scene data x2 corresponds to scene y2, the scene data x3 corresponds to scene y3, the scene data x4 corresponds to scene y4, and the scene data x5 corresponds to scene y 5.
Wherein the scenes y1, y2, y3, y4, y5 are all known scenes to the scene recognition model. The scenes y1, y2, y3, y4 and y5 are all scenes existing in the life of the user, such as walking, running, going upstairs, self-driving travel, getting up, sleeping and the like.
The current scene is a scene unknown to the scene recognition model; that is, the scene recognition model cannot recognize the current scene. After the electronic device acquires the perception data of the current scene, it can determine the target scene data from the plurality of scene data according to the perception data, where the scene corresponding to the target scene data is similar to the current scene. For example, when the current scene is going downstairs, the scene corresponding to the target scene data determined by the electronic device may be going upstairs. For another example, when the current scene is riding a bus, the scene corresponding to the target scene data determined by the electronic device may be a self-driving trip.
It should be noted that the target scene data determined by the electronic device may include only scene data of one known scene, or may include scene data of a plurality of known scenes.
And 130, training the perception data and the target scene data to obtain a submodel, wherein the submodel is used for identifying the current scene.
After the electronic device determines the target scene data, it can train on the perception data and the target scene data to obtain a sub-model, where the sub-model is used to identify the current scene. For example, the electronic device may train on the perception data and the target scene data according to a preset machine learning algorithm. The preset machine learning algorithm may be, for example, a Support Vector Machine (SVM) algorithm.
140, adding the sub-model to the scene recognition model to update the scene recognition model.
And after the electronic equipment obtains the sub-model, adding the sub-model into the scene recognition model so as to update the scene recognition model. And the updated scene recognition model comprises the sub-model obtained by training. Since the sub-model can identify the current scene, the updated scene identification model can identify the current scene. Thus, the updated scene recognition model can recognize more scenes.
For example, in some embodiments, the electronic device may acquire, through the information sensing layer, sensing data of a current scene, and determine, through the data processing layer, target scene data from the plurality of scene data according to the sensing data. It can be understood that before the target scene data is determined from the plurality of scene data, the data processing layer may further perform data cleaning, data transformation, and the like on the sensing data of the current scene, and determine the target scene data from the plurality of scene data according to the processed sensing data.
Then, the electronic device can train the perception data and the target scene data through a scene modeling layer to obtain a sub-model, and the sub-model is added to a scene recognition model to update the scene recognition model.
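The overall update flow of steps 110 to 140 can be sketched as follows. The class and method names, and the use of Euclidean distance to pick the similar scene, are illustrative assumptions rather than the patent's prescribed implementation:

```python
import numpy as np

class SceneRecognitionModel:
    """Minimal sketch of the adaptive update flow (steps 110-140)."""

    def __init__(self, scene_data):
        # scene_data: dict mapping a known-scene label -> its feature vector
        self.scene_data = dict(scene_data)
        self.sub_models = []

    def find_similar(self, perception):
        # Step 120: the known scene whose stored data lies closest to the perception data
        return min(self.scene_data,
                   key=lambda s: np.linalg.norm(np.asarray(self.scene_data[s]) - perception))

    def update(self, perception, train_sub_model):
        # Steps 130-140: train a sub-model against the similar scene's data, then add it
        target = self.find_similar(np.asarray(perception))
        self.sub_models.append(train_sub_model(perception, self.scene_data[target]))
        return target
```

After `update`, the model holds one additional sub-model, so the updated scene recognition model covers one more scene than before.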
In the embodiments of the application, the electronic device can update the scene recognition model through adaptive learning, so that the updated scene recognition model can recognize the previously unknown current scene. Because the model can then recognize more scenes, the accuracy of scene recognition on the electronic device is improved.
In some embodiments, referring to fig. 3, fig. 3 is a second flowchart illustrating a scene recognition model updating method provided in an embodiment of the present application.
Step 120, determining target scene data from a plurality of scene data according to the perception data, including the following steps:
121, clustering the perception data and the plurality of scene data to obtain a clustering result;
and 122, determining the scene data in the same category as the perception data in the clustering result as target scene data.
After the electronic device obtains the sensing data of the current scene, the sensing data and the plurality of scene data in the scene sensing model can be clustered to obtain a clustering result. For example, the perception data and the plurality of scene data may be clustered by a K-means clustering algorithm.
Subsequently, the electronic device may determine, as target scene data, scene data in the same category as the perception data in the clustering result. For example, the perceptual data of the current scene is x 6. Scene data x1, x2 in the clustering result are in the same category as the perception data x 6. At this time, the electronic device may determine the scene data x1 and x2 as the target scene data.
For another example, in terms of a concrete scene from the user's life: suppose the current scene is going downstairs, and in the clustering result the scene data of the going-upstairs scene falls into the same category as the perception data of the current scene. Then, the electronic device may determine the scene data of the going-upstairs scene as the target scene data.
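Steps 121 and 122 can be sketched with a tiny K-means routine. The patent does not prescribe an implementation, so the code below is only an illustrative stand-in, with function names chosen for this sketch:

```python
import numpy as np

def kmeans_labels(points, k, iters=20, seed=0):
    """Tiny K-means: return a cluster label for each point (illustrative stand-in for step 121)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center, then move centers to cluster means
        labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels

def target_scene_data(perception, scene_data, k=2):
    """Step 122: keep the known scenes whose data lands in the perception data's cluster."""
    names = list(scene_data)
    pts = [scene_data[n] for n in names] + [perception]
    labels = kmeans_labels(pts, k)
    return [n for n, lab in zip(names, labels) if lab == labels[-1]]
```

In the going-downstairs example, the going-upstairs scene data would end up in the same cluster as the perception data and so be selected as the target scene data.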
In some embodiments, referring to fig. 4, fig. 4 is a third flowchart illustrating a scene recognition model updating method provided in an embodiment of the present application.
Step 120, determining target scene data from a plurality of scene data according to the perception data, including the following steps:
123, respectively calculating cosine similarity of the perception data and each scene data;
and 124, determining the scene data with the cosine similarity greater than a preset threshold value as target scene data.
After the electronic equipment acquires the perception data of the current scene, the cosine similarity between the perception data and each scene data is calculated respectively. For example, the electronic device may calculate the cosine similarity of the perception data and each scene data separately through a cosine similarity algorithm.
And then, the electronic equipment determines the scene data with the cosine similarity greater than a preset threshold as target scene data according to the calculated result. The preset threshold may be a value preset in the electronic device, for example, 80%.
For example, the perceptual data of the current scene is x 6. In the result obtained by the electronic device, the cosine similarity between the scene data x3 and the perception data x6 is greater than a preset threshold. At this time, the electronic device may determine the scene data x3 as the target scene data.
For another example, suppose the current scene is riding public transport, and the calculated cosine similarity between the scene data of the self-driving trip scene and the perception data of the current scene exceeds the preset threshold of 80%. Then, the electronic device may determine the scene data of the self-driving trip scene as the target scene data.
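Steps 123 and 124 can be sketched as follows. The 0.8 threshold mirrors the 80% example above, and the function names are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Step 123: cosine similarity between the perception data and one scene's data."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_targets(perception, scene_data, threshold=0.8):
    """Step 124: keep the scenes whose similarity to the perception data exceeds the threshold."""
    return [name for name, vec in scene_data.items()
            if cosine_similarity(perception, vec) > threshold]
```

Unlike the clustering variant, this selection compares the perception data against each scene's data independently, so several known scenes can pass the threshold at once.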
In some embodiments, referring to fig. 5, fig. 5 is a fourth flowchart illustrating a scene recognition model updating method provided in an embodiment of the present application.
Step 130, training the perception data and the target scene data to obtain a sub-model, including the following steps:
131, determining the perception data as a positive sample;
132, determining the target scene data as negative examples;
133, training the positive sample and the negative sample to obtain a support vector machine classifier;
134, determining the support vector machine classifier as a sub-model.
After the electronic equipment determines the target scene data, the sub-model can be obtained through training. The sub-model is used for identifying the current scene.
During training, the electronic device uses the acquired perception data as positive samples and the determined target scene data as negative samples. It then trains on the positive and negative samples to obtain a Support Vector Machine (SVM) classifier, and determines the trained SVM classifier as the sub-model.
Because the target scene data serves as the negative samples during training, the trained SVM classifier can accurately distinguish the current scene from its similar scene, that is, from the scene corresponding to the target scene data. The sub-model can therefore be used to accurately identify the current scene.
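A minimal sketch of steps 131 to 134 using scikit-learn's `SVC` (the library choice is an assumption; the patent only specifies a support vector machine classifier). Perception samples are labeled 1 (positive) and target scene samples 0 (negative):

```python
import numpy as np
from sklearn.svm import SVC

def train_sub_model(perception_samples, target_scene_samples):
    """Steps 131-134: build positive/negative samples and fit an SVM classifier."""
    X = np.vstack([perception_samples, target_scene_samples])
    y = np.array([1] * len(perception_samples) + [0] * len(target_scene_samples))
    clf = SVC(kernel="linear")  # the trained classifier serves as the sub-model
    clf.fit(X, y)
    return clf

# e.g. going downstairs (current scene) vs. going upstairs (similar known scene),
# with made-up two-dimensional feature vectors
sub_model = train_sub_model([[0.0, 1.0], [0.2, 1.1]], [[1.0, 0.0], [1.1, 0.2]])
```

Because the negatives come from the most similar known scene, the decision boundary is placed exactly where the two scenes are hardest to tell apart.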
In some embodiments, referring to fig. 6, fig. 6 is a fifth flowchart illustrating a scene recognition model updating method provided in an embodiment of the present application.
Before acquiring the perception data of the current scene in step 110, the method further includes the following steps:
150, judging whether the current scene is one of a plurality of known scenes;
and if the current scene is not any one of the plurality of known scenes, acquiring the perception data of the current scene.
The electronic equipment is provided with a scene recognition model. The scene recognition model may include scene data for a plurality of known scenes. Wherein each of the scene data corresponds to a known scene. That is, each of the scene data is used to represent a known scene.
When the electronic device executes the function of identifying the current scene, it may first determine whether the current scene is one of a plurality of known scenes. For example, the electronic device may invoke the scene recognition model in an attempt to recognize the current scene. If the scene recognition model can successfully recognize the current scene, the current scene is one of the plurality of known scenes. If the scene recognition model cannot successfully recognize the current scene, the current scene is not any one of the plurality of known scenes.
If the electronic device determines that the current scene is one of the known scenes, the electronic device may terminate the scene recognition procedure.
If the electronic device determines that the current scene is not any of the plurality of known scenes, the electronic device may acquire scene data of the current scene and continue to perform subsequent processing.
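Step 150's known-scene check can be sketched as a similarity test against the stored scene data. The distance measure and threshold below are illustrative assumptions, not the patent's specified mechanism:

```python
import math

def is_known_scene(perception, scene_data, max_distance=1.0):
    """Step 150 sketch: the scene counts as known if some stored scene data is close enough."""
    return any(math.dist(perception, vec) <= max_distance
               for vec in scene_data.values())

def recognize_or_collect(perception, scene_data, collect):
    # Steps 150 onward: only acquire perception data for further processing
    # when the current scene is not any of the known scenes.
    if is_known_scene(perception, scene_data):
        return "recognized"
    collect(perception)
    return "unknown"
```

Gating acquisition this way means the adaptive-learning path only runs for scenes the model genuinely cannot recognize yet.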
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
From the above, the scene recognition model updating method provided by the embodiments of the application includes: acquiring perception data of a current scene; determining target scene data from a plurality of scene data according to the perception data, where the scene corresponding to the target scene data is similar to the current scene; training on the perception data and the target scene data to obtain a sub-model used to identify the current scene; and adding the sub-model to the scene recognition model to update the scene recognition model. With this method, the electronic device can update the scene recognition model through adaptive learning, so that the updated model can recognize the previously unknown current scene. Because the model can then recognize more scenes, the accuracy of scene recognition on the electronic device is improved.
The embodiment of the application also provides a scene recognition model updating device, and the scene recognition model updating device can be integrated in the electronic equipment. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (augmented reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a laptop computer, a desktop computing device, a wearable device such as an electronic watch, an electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, an electronic garment, or the like.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a scene recognition model updating apparatus according to an embodiment of the present application.
Wherein the scene recognition model updating apparatus 200 comprises: an obtaining module 201, a determining module 202, a training module 203 and an updating module 204.
An obtaining module 201, configured to obtain the perception data of the current scene.
The obtaining module 201 may obtain the perception data of the current scene. The current scene is the scene of the environment in which the electronic device is currently located, that is, the scene of the environment in which the user of the electronic device is currently located. The perception data may include any type of data. For example, the perception data may include environmental data such as ambient temperature, ambient light intensity and ambient noise. The perception data may also include operating data of the electronic device, such as image data, memory data and the text data displayed on the electronic device. The perception data may further include user behavior habit data, such as the user's work and rest schedule and the entertainment activities the user engages in.
The obtaining module 201 may collect the perception data of the current scene through the information perception layer of a panoramic perception architecture in the electronic device. For example, the obtaining module 201 may detect the ambient temperature through a temperature sensor, detect the ambient light intensity through a light sensor, and detect ambient noise through a microphone; it may retrieve image data, memory data and the like through the processing circuit of the electronic device, and obtain the text data displayed on the electronic device through the display control circuit. In addition, the obtaining module 201 may perceive and record the user's behavior through the information perception layer to form the user's behavior habit data.
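Readings collected in this way can be flattened into a numeric feature vector for the later clustering and similarity steps. The sketch below is illustrative only; the field names, units and sensor set are assumptions, not part of the panoramic perception architecture described here:

```python
import numpy as np

def build_perception_vector(readings):
    """Flatten a dict of sensor readings into a fixed-order feature vector.

    The keys below are hypothetical; a real information perception layer
    would define its own schema.
    """
    keys = ("ambient_temperature", "ambient_light", "ambient_noise", "acceleration")
    return np.array([float(readings[k]) for k in keys])

# Example readings, as the information perception layer might report them.
sample = {
    "ambient_temperature": 23.5,  # degrees Celsius, from the temperature sensor
    "ambient_light": 310.0,       # lux, from the light sensor
    "ambient_noise": 52.0,        # dB, from the microphone
    "acceleration": 1.8,          # m/s^2, from an acceleration sensor
}
vector = build_perception_vector(sample)
print(vector.shape)  # (4,)
```

A fixed key order matters here: the clustering and cosine-similarity steps below compare vectors componentwise, so every scene must be encoded with the same feature layout.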
A determining module 202, configured to determine target scene data from a plurality of scene data according to the perception data, where the scene corresponding to the target scene data is a scene similar to the current scene.
A scene recognition model is provided in the electronic device. The scene recognition model may include scene data of a plurality of known scenes, where each piece of scene data corresponds to, and is used to represent, one known scene. For example, when the scene recognition model is trained on the scene data of the plurality of known scenes, it may store that scene data. A known scene is a scene that the scene recognition model can recognize, that is, a scene known to the scene recognition model.
For example, the scene recognition model may include five pieces of scene data x1, x2, x3, x4 and x5, where scene data x1 corresponds to scene y1, x2 to scene y2, x3 to scene y3, x4 to scene y4, and x5 to scene y5.
The scenes y1, y2, y3, y4 and y5 are all known to the scene recognition model, and are all scenes occurring in the user's daily life, such as walking, running, going upstairs, self-driving travel, getting up and sleeping.
The current scene is a scene unknown to the scene recognition model; that is, the scene recognition model cannot recognize the current scene. After the obtaining module 201 obtains the perception data of the current scene, the determining module 202 may determine the target scene data from the plurality of scene data according to the perception data, where the scene corresponding to the target scene data is similar to the current scene. For example, when the current scene is going downstairs, the scene corresponding to the target scene data determined by the determining module 202 may be going upstairs. For another example, when the current scene is riding a bus, the scene corresponding to the target scene data may be self-driving travel.
It should be noted that the target scene data determined by the determining module 202 may include the scene data of only one known scene, or the scene data of a plurality of known scenes.
A training module 203, configured to train the perception data and the target scene data to obtain a sub-model, where the sub-model is used to recognize the current scene.
After the determining module 202 determines the target scene data, the training module 203 may train the perception data and the target scene data to obtain a sub-model, which is used to recognize the current scene. For example, the training module 203 may perform the training according to a preset machine learning algorithm, such as a Support Vector Machine (SVM) algorithm.
An updating module 204, configured to add the sub-model to the scene recognition model to update the scene recognition model.
After the training module 203 obtains the sub-model, the updating module 204 adds the sub-model to the scene recognition model to update it. The updated scene recognition model includes the trained sub-model; since the sub-model can recognize the current scene, so can the updated scene recognition model. The updated scene recognition model can thus recognize more scenes.
In the embodiment of the present application, the electronic device can update the scene recognition model through adaptive learning, so that the updated scene recognition model can recognize a scene that was previously unknown. The scene recognition model can thus recognize more scenes, improving the accuracy of scene recognition by the electronic device.
In some embodiments, the determining module 202 is configured to perform the following steps:
clustering the perception data and the plurality of scene data to obtain a clustering result;
and determining, as the target scene data, the scene data that falls in the same category as the perception data in the clustering result.
After the obtaining module 201 obtains the perception data of the current scene, the determining module 202 may cluster the perception data and the plurality of scene data in the scene recognition model to obtain a clustering result. For example, the perception data and the plurality of scene data may be clustered by a K-means clustering algorithm.
Subsequently, the determining module 202 may determine, as the target scene data, the scene data in the same category as the perception data in the clustering result. For example, suppose the perception data of the current scene is x6, and scene data x1 and x2 fall in the same category as x6 in the clustering result. The determining module 202 may then determine scene data x1 and x2 as the target scene data.
For another example, mapping the clustering result to a concrete scene in the user's life: if the current scene is going downstairs and the scene data of the going-upstairs scene falls in the same category as the perception data of the current scene, the determining module 202 may determine the scene data of the going-upstairs scene as the target scene data.
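A minimal sketch of this clustering step, using scikit-learn's KMeans as a stand-in for whatever clustering implementation the device actually uses; the scene names and 2-D vectors are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2-D scene vectors: x1/x2 sit near each other (stair-like motion),
# x3/x4 sit far away (driving-like motion). Real scene data would be the
# feature vectors stored in the scene recognition model.
scene_data = {
    "x1": np.array([1.0, 1.2]),
    "x2": np.array([0.8, 1.0]),
    "x3": np.array([9.0, 9.5]),
    "x4": np.array([9.2, 8.8]),
}
perception = np.array([1.1, 0.9])  # x6: perception data of the unknown scene

# Cluster the perception data together with all known scene data.
points = np.vstack([perception] + list(scene_data.values()))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# Scene data that lands in the same cluster as the perception data
# becomes the target scene data.
target = [name for name, label in zip(scene_data, labels[1:]) if label == labels[0]]
print(target)  # ['x1', 'x2']
```

With well-separated clusters like these, the unknown scene is grouped with its most similar known scenes, matching the going-downstairs/going-upstairs example above.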
In some embodiments, the determining module 202 is configured to perform the following steps:
calculating the cosine similarity between the perception data and each piece of scene data;
and determining the scene data whose cosine similarity is greater than a preset threshold as the target scene data.
After the obtaining module 201 obtains the perception data of the current scene, the determining module 202 calculates the cosine similarity between the perception data and each piece of scene data, for example by treating each as a feature vector and applying the cosine similarity formula.
Subsequently, based on the calculated results, the determining module 202 determines the scene data whose cosine similarity with the perception data is greater than a preset threshold as the target scene data. The preset threshold may be a value preset in the electronic device, for example 80%.
For example, suppose the perception data of the current scene is x6, and the calculated cosine similarity between scene data x3 and x6 is greater than the preset threshold. The determining module 202 may then determine scene data x3 as the target scene data.
For another example, if the current scene is riding a bus and the cosine similarity between the scene data of the self-driving travel scene and the perception data of the current scene is greater than the preset threshold of 80%, the determining module 202 may determine the scene data of the self-driving travel scene as the target scene data.
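A minimal sketch of this cosine-similarity variant, with synthetic feature vectors; the 0.8 threshold matches the 80% example above:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # the preset threshold from the text

# Hypothetical feature vectors; x6 is the perception data of the current scene.
perception = np.array([0.9, 1.0, 0.1])
scene_data = {
    "x3": np.array([1.0, 1.1, 0.2]),  # similar direction -> high similarity
    "x5": np.array([0.1, 0.0, 1.0]),  # different direction -> low similarity
}

# Scene data whose similarity to the perception data exceeds the threshold
# becomes the target scene data.
target = [name for name, vec in scene_data.items()
          if cosine_similarity(perception, vec) > THRESHOLD]
print(target)  # ['x3']
```

Note that cosine similarity compares only the direction of the vectors, not their magnitude, so scenes with proportionally scaled readings still match.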
In some embodiments, the training module 203 is configured to perform the following steps:
determining the perception data as a positive sample;
determining the target scene data as a negative sample;
training the positive sample and the negative sample to obtain a support vector machine classifier;
and determining the support vector machine classifier as the sub-model.
After the determining module 202 determines the target scene data, the training module 203 may obtain the sub-model through training, the sub-model being used to recognize the current scene.
During training, the training module 203 determines the obtained perception data as a positive sample and the determined target scene data as a negative sample. It then trains the positive sample and the negative sample to obtain a Support Vector Machine (SVM) classifier, and determines the trained classifier as the sub-model.
Because the training module 203 uses the target scene data as a negative sample during training, the trained SVM classifier can accurately distinguish the current scene from its similar scene, that is, from the scene corresponding to the target scene data. The sub-model can therefore be used to accurately recognize the current scene.
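A minimal sketch of this training step, using scikit-learn's SVC in place of whatever SVM implementation the device actually ships; all feature vectors are synthetic:

```python
import numpy as np
from sklearn.svm import SVC

# Positive samples: perception data of the current (unknown) scene.
# Negative samples: target scene data of the similar known scene.
positives = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
negatives = np.array([[3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])

X = np.vstack([positives, negatives])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = current scene, 0 = similar known scene

# The trained SVM classifier is the sub-model added to the scene
# recognition model.
sub_model = SVC(kernel="linear").fit(X, y)

print(sub_model.predict([[1.0, 1.05]]))  # [1] -> recognized as the current scene
print(sub_model.predict([[3.0, 3.05]]))  # [0] -> the similar known scene
```

Training against the most similar known scene (rather than random negatives) forces the decision boundary to separate exactly the pair of scenes that are easiest to confuse.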
In some embodiments, referring to fig. 8, fig. 8 is a second schematic structural diagram of a scene recognition model updating apparatus provided in an embodiment of the present application.
The scene recognition model updating apparatus 200 further includes a judging module 205.
A judging module 205, configured to judge whether the current scene is one of the plurality of known scenes.
The obtaining module 201 is configured to obtain the perception data of the current scene if the current scene is not any one of the plurality of known scenes.
A scene recognition model is provided in the electronic device. The scene recognition model may include scene data of a plurality of known scenes, where each piece of scene data corresponds to, and is used to represent, one known scene.
When the electronic device executes the function of recognizing the current scene, the judging module 205 may first judge whether the current scene is one of the plurality of known scenes. For example, the judging module 205 may invoke the scene recognition model to attempt to recognize the current scene. If the scene recognition model recognizes the current scene successfully, the current scene is one of the plurality of known scenes; if it does not, the current scene is not any one of them.
If the judging module 205 determines that the current scene is one of the plurality of known scenes, the electronic device may terminate the scene recognition process.
If the judging module 205 determines that the current scene is not any of the plurality of known scenes, the obtaining module 201 may obtain the perception data of the current scene, and the subsequent processing continues.
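The judging logic can be sketched as follows. The model representation here (a mapping from scene name to a recognition predicate) is a hypothetical simplification of the actual scene recognition model:

```python
def identify_scene(scene_recognition_model, perception_vector):
    """Try each sub-model in turn; return the name of the scene it recognizes,
    or None when the scene is unknown to the model.

    `scene_recognition_model` is assumed to map scene names to predicates
    over perception vectors.
    """
    for scene_name, recognizes in scene_recognition_model.items():
        if recognizes(perception_vector):
            return scene_name
    return None  # unknown scene: acquire perception data and update the model

# Toy model: two known scenes keyed on a single feature value.
model = {
    "walking": lambda v: v[0] < 2.0,
    "driving": lambda v: v[0] > 8.0,
}

print(identify_scene(model, [1.0]))  # 'walking' -> terminate recognition
print(identify_scene(model, [5.0]))  # None -> not any known scene
```

Returning None is the branch that triggers the adaptive-learning path described above: collect the perception data, determine target scene data, train a new sub-model, and add it to the model.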
In a specific implementation, the above modules may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities.
As can be seen from the above, in the scene recognition model updating apparatus 200 provided in the embodiment of the present application, the obtaining module 201 obtains the perception data of the current scene; the determining module 202 determines target scene data from a plurality of scene data according to the perception data, where the scene corresponding to the target scene data is similar to the current scene; the training module 203 trains the perception data and the target scene data to obtain a sub-model used to recognize the current scene; and the updating module 204 adds the sub-model to the scene recognition model to update it. The apparatus can update the scene recognition model through adaptive learning, so that the updated scene recognition model can recognize a scene that was previously unknown; the scene recognition model can thus recognize more scenes, improving the accuracy of scene recognition by the electronic device.
The embodiment of the present application also provides an electronic device. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (augmented reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a laptop computer, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or an electronic garment.
Referring to fig. 9, fig. 9 is a schematic view of a first structure of an electronic device according to an embodiment of the present application.
Electronic device 300 includes, among other things, a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is the control center of the electronic device 300. It connects the various parts of the electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or calling the computer program stored in the memory 302 and calling the data stored in the memory 302, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to the processes of one or more computer programs into the memory 302, and runs the computer program stored in the memory 302, thereby implementing the following functions:
acquiring perception data of a current scene;
determining target scene data from a plurality of scene data according to the perception data, wherein a scene corresponding to the target scene data is a similar scene with the current scene;
training the perception data and the target scene data to obtain a submodel, wherein the submodel is used for identifying the current scene;
adding the sub-model to the scene recognition model to update the scene recognition model.
In some embodiments, when determining the target scene data from the plurality of scene data according to the perception data, the processor 301 performs the following steps:
clustering the perception data and the scene data to obtain a clustering result;
and determining scene data in the same category as the perception data in the clustering result as target scene data.
In some embodiments, when determining the target scene data from the plurality of scene data according to the perception data, the processor 301 performs the following steps:
respectively calculating cosine similarity of the perception data and each scene data;
and determining the scene data with the cosine similarity greater than a preset threshold value as target scene data.
In some embodiments, when training the perception data and the target scene data to obtain a sub-model, the processor 301 performs the following steps:
determining the perceptual data as a positive sample;
determining the target scene data as a negative sample;
training the positive sample and the negative sample to obtain a support vector machine classifier;
and determining the support vector machine classifier as a sub-model.
In some embodiments, before obtaining the perceptual data of the current scene, the processor 301 further performs the following steps:
judging whether the current scene is one of the plurality of known scenes;
and if the current scene is not any one of the plurality of known scenes, acquiring the perception data of the current scene.
The memory 302 may be used to store computer programs and data. The computer programs stored in the memory 302 contain instructions executable by the processor and may constitute various functional modules. The processor 301 executes various functional applications and performs data processing by calling the computer programs stored in the memory 302.
In some embodiments, referring to fig. 10, fig. 10 is a schematic view of a second structure of an electronic device provided in an embodiment of the present application.
The electronic device 300 further includes: a display screen 303, a control circuit 304, an input unit 305, a sensor 306 and a power supply 307. The processor 301 is electrically connected to the display screen 303, the control circuit 304, the input unit 305, the sensor 306 and the power supply 307.
The display screen 303 may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video and any combination thereof.
The control circuit 304 is electrically connected to the display 303, and is configured to control the display 303 to display information.
The input unit 305 may be used to receive input numbers, character information or user characteristic information (for example, a fingerprint), and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. The input unit 305 may include a fingerprint recognition module.
The sensor 306 is used to collect information of the electronic device itself or information of the user or external environment information. For example, the sensor 306 may include a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
The power supply 307 is used to power the various components of the electronic device 300. In some embodiments, the power supply 307 may be logically coupled to the processor 301 through a power management system, such that functions of managing charging, discharging, and power consumption are performed through the power management system.
Although not shown in fig. 10, the electronic device 300 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, an embodiment of the present application provides an electronic device that performs the following steps: acquiring perception data of a current scene; determining target scene data from a plurality of scene data according to the perception data, wherein the scene corresponding to the target scene data is a scene similar to the current scene; training the perception data and the target scene data to obtain a sub-model, wherein the sub-model is used for recognizing the current scene; and adding the sub-model to the scene recognition model to update the scene recognition model. The electronic device can update the scene recognition model through adaptive learning, so that the updated scene recognition model can recognize a scene that was previously unknown; the scene recognition model can thus recognize more scenes, improving the accuracy of scene recognition by the electronic device.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the scene recognition model updating method according to any of the above embodiments.
It should be noted that all or part of the steps in the methods of the above embodiments may be implemented by hardware under the instruction of a computer program, which may be stored in a computer-readable storage medium. The storage medium may include, but is not limited to: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The scene recognition model updating method and apparatus, storage medium and electronic device provided by the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principle and implementation of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A scene recognition model updating method, wherein the scene recognition model includes scene data of a plurality of known scenes, and the scene recognition model updating method includes:
acquiring perception data of a current scene;
determining target scene data from the plurality of scene data according to the perception data, wherein a scene corresponding to the target scene data is a similar scene with the current scene;
training the perception data and the target scene data to obtain a submodel, wherein the submodel is used for identifying the current scene;
adding the sub-model to the scene recognition model to update the scene recognition model.
2. The method of updating a scene recognition model according to claim 1, wherein the step of determining target scene data from the plurality of scene data according to the perception data comprises:
clustering the perception data and the scene data to obtain a clustering result;
and determining scene data in the same category as the perception data in the clustering result as target scene data.
3. The method of updating a scene recognition model according to claim 1, wherein the step of determining target scene data from the plurality of scene data according to the perception data comprises:
respectively calculating cosine similarity of the perception data and each scene data;
and determining the scene data with the cosine similarity greater than a preset threshold value as target scene data.
4. The method of updating a scene recognition model according to claim 1, wherein the training of the perception data and the target scene data to obtain a sub-model comprises:
determining the perceptual data as a positive sample;
determining the target scene data as a negative sample;
training the positive sample and the negative sample to obtain a support vector machine classifier;
and determining the support vector machine classifier as a sub-model.
5. The scene recognition model updating method according to any one of claims 1 to 4, wherein the step of obtaining the perception data of the current scene further comprises:
judging whether the current scene is one of the plurality of known scenes;
and if the current scene is not any one of the plurality of known scenes, acquiring the perception data of the current scene.
6. A scene recognition model updating apparatus, wherein the scene recognition model includes scene data of a plurality of known scenes, the scene recognition model updating apparatus comprising:
the acquisition module is used for acquiring the perception data of the current scene;
a determining module, configured to determine target scene data from the plurality of scene data according to the sensing data, where a scene corresponding to the target scene data is a similar scene to the current scene;
the training module is used for training the perception data and the target scene data to obtain a submodel, wherein the submodel is used for identifying the current scene;
and the updating module is used for adding the sub-model into the scene recognition model so as to update the scene recognition model.
7. The scene recognition model updating apparatus of claim 6, wherein the determining module is configured to:
clustering the perception data and the scene data to obtain a clustering result;
and determining scene data in the same category as the perception data in the clustering result as target scene data.
8. The scene recognition model updating apparatus of claim 6, wherein the determining module is configured to:
respectively calculating cosine similarity of the perception data and each scene data;
and determining the scene data with the cosine similarity greater than a preset threshold value as target scene data.
9. The scene recognition model updating apparatus of claim 6, wherein the training module is configured to:
determining the perceptual data as a positive sample;
determining the target scene data as a negative sample;
training the positive sample and the negative sample to obtain a support vector machine classifier;
and determining the support vector machine classifier as a sub-model.
10. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the scene recognition model updating method according to any one of claims 1 to 5.
11. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the scene recognition model updating method according to any one of claims 1 to 5 by calling the computer program stored in the memory.
CN201910282003.1A 2019-04-09 2019-04-09 Scene recognition model updating method and device, storage medium and electronic equipment Active CN111796663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282003.1A CN111796663B (en) 2019-04-09 2019-04-09 Scene recognition model updating method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282003.1A CN111796663B (en) 2019-04-09 2019-04-09 Scene recognition model updating method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111796663A true CN111796663A (en) 2020-10-20
CN111796663B CN111796663B (en) 2022-08-16

Family

ID=72805675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282003.1A Active CN111796663B (en) 2019-04-09 2019-04-09 Scene recognition model updating method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111796663B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550143A (en) * 2022-04-28 2022-05-27 新石器慧通(北京)科技有限公司 Scene recognition method and device during driving of unmanned vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473570A (en) * 2013-08-16 2013-12-25 电子科技大学 Self-adaptive video scene pedestrian detection method
CN104408475A (en) * 2014-12-08 2015-03-11 深圳市捷顺科技实业股份有限公司 Vehicle license plate identification method and vehicle license plate identification equipment
CN104751198A (en) * 2013-12-27 2015-07-01 华为技术有限公司 Method and device for identifying target object in image
US20170185873A1 (en) * 2014-09-16 2017-06-29 Huawei Technologies Co., Ltd. Data Processing Method and Device
CN108460396A (en) * 2017-09-20 2018-08-28 腾讯科技(深圳)有限公司 The negative method of sampling and device


Also Published As

Publication number Publication date
CN111796663B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN111797858A (en) Model training method, behavior prediction method, device, storage medium and equipment
CN111797854B (en) Scene model building method and device, storage medium and electronic equipment
CN111797302A (en) Model processing method and device, storage medium and electronic equipment
CN111798259A (en) Application recommendation method and device, storage medium and electronic equipment
CN111797861A (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN111797288A (en) Data screening method and device, storage medium and electronic equipment
CN111796925A (en) Method and device for screening algorithm model, storage medium and electronic equipment
CN111798367A (en) Image processing method, image processing device, storage medium and electronic equipment
CN111796926A (en) Instruction execution method and device, storage medium and electronic equipment
CN111796663B (en) Scene recognition model updating method and device, storage medium and electronic equipment
CN111798019B (en) Intention prediction method, intention prediction device, storage medium and electronic equipment
CN111797873A (en) Scene recognition method and device, storage medium and electronic equipment
CN111753813A (en) Image processing method, device, equipment and storage medium
CN111797849A (en) User activity identification method and device, storage medium and electronic equipment
CN111797867A (en) System resource optimization method and device, storage medium and electronic equipment
CN111797856A (en) Modeling method, modeling device, storage medium and electronic equipment
CN111797986A (en) Data processing method, data processing device, storage medium and electronic equipment
CN111797148A (en) Data processing method, data processing device, storage medium and electronic equipment
CN111814812A (en) Modeling method, modeling device, storage medium, electronic device and scene recognition method
CN111796701A (en) Model training method, operation processing method, device, storage medium and equipment
CN111797289A (en) Model processing method and device, storage medium and electronic equipment
CN111797656B (en) Face key point detection method and device, storage medium and electronic equipment
WO2020207294A1 (en) Service processing method and apparatus, and storage medium and electronic device
CN111797869A (en) Model training method and device, storage medium and electronic equipment
CN111797303A (en) Information processing method, information processing apparatus, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant