CN110895670B - Scene recognition method and device - Google Patents

Scene recognition method and device

Info

Publication number
CN110895670B
CN110895670B (application CN201811070478.6A)
Authority
CN
China
Prior art keywords
scene
model
rule
machine learning
control instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811070478.6A
Other languages
Chinese (zh)
Other versions
CN110895670A (en)
Inventor
易斌
高丹
万会
王沅召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201811070478.6A
Publication of CN110895670A
Application granted
Publication of CN110895670B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a scene recognition method and device. The method comprises: acquiring a first scene outside a safety door through a camera; recognizing the first scene with a trained machine learning model to obtain a corresponding first control instruction; and subsequently executing an operation according to the first control instruction.

Description

Scene recognition method and device
Technical Field
The present application relates to, but is not limited to, the field of security, and in particular to a scene recognition method and device.
Background
In the related art, as people pay increasing attention to home security, the smart cat eye has gradually become a popular home-security product. The smart cat eye provides a liquid-crystal display, automatic photo and/or video capture, and motion-sensing monitoring. Through the display, the user can check the conditions outside the door; the device can also automatically photograph visitors and keep records, so that a user who has gone out can review the visit log upon returning. The video captured by the smart cat eye contains a great deal of information; for example, it can record all the actions of visitors outside the door.
For the problems in the related art that the number of scenes a smart cat eye can recognize is limited and the recognition rate is low, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present application provide a scene recognition method and device, so as to at least solve the problems in the related art that the number of scenes recognized by the smart cat eye is limited and the recognition rate is low.
According to an embodiment of the present application, a scene recognition method is provided, comprising: acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door; obtaining, by using a machine learning model, a first control instruction having an association relation with the first scene, wherein the machine learning model is obtained by training an original model with first sample information as input information of the original model, the first sample information comprises a first rule and a plurality of groups of scenes, and the first rule is a rule for identifying a control instruction according to a scene; and executing an operation corresponding to the first control instruction.
According to another embodiment of the present application, there is also provided an apparatus for recognizing a scene, including: the first acquisition module is used for acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door; a second obtaining module, configured to obtain, by using a machine learning model, a first control instruction having an association relationship with the first scene, where the machine learning model is a model obtained by training an original model using first sample information as input information of the original model, and the first sample information includes a first rule and multiple groups of scenes, where the first rule is a rule for identifying a control instruction according to the scenes; and the control module is used for executing the operation corresponding to the first control instruction.
According to a further embodiment of the present application, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present application, there is also provided an electronic device, comprising a memory having a computer program stored therein and a processor configured to run the computer program to perform the steps of any of the method embodiments described above.
Through the present application, a first scene outside the safety door is acquired through the camera, the first scene is recognized by the trained machine learning model, and the corresponding first control instruction is obtained; an operation is subsequently executed according to the first control instruction. With this scheme, machine learning and big data are fully utilized to automatically recognize the scene outside the safety door, and an alarm is actively raised or the house owner is contacted when necessary. This solves the problems in the related art that the number of scenes recognized by the smart cat eye is limited and the recognition rate is low, increasing the number of recognizable scenes and improving recognition accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal of a scene recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of identifying a scene according to an embodiment of the application;
FIG. 3 is a training schematic of a scene recognition model according to another embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example one
The method provided by the first embodiment of the present application can be applied to devices such as a smart cat eye or a safety door; optionally, the method steps can be executed in a computer terminal or a similar computing device. Taking the application running on a computer terminal as an example, fig. 1 is a block diagram of the hardware structure of a computer terminal for the scene recognition method according to an embodiment of the application. As shown in fig. 1, the computer terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may optionally also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the computer terminal. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be configured to store software programs and modules of application software, such as program instructions/modules corresponding to the scene recognition method in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet via wireless.
In this embodiment, a method for identifying a scene is provided, and fig. 2 is a flowchart of a method for identifying a scene according to an embodiment of the present application, where as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door;
the camera can be arranged on the cat eye or independently arranged on the monitoring equipment.
Step S204, a machine learning model is used for obtaining a first control instruction which has an incidence relation with the first scene, wherein the machine learning model is obtained by training an original model by using first sample information as input information of the original model, the first sample information comprises a first rule and a plurality of groups of scenes, and the first rule is used for identifying the control instruction according to the scenes;
the first rule may be a multi-layer rule for the original model, and optionally each layer is identified by a first rule for information, such as whether a person or an animal is present in the first layer identification scene of the original model. The second layer of the model is used to identify whether a person in the scene has a lock picking tool, or a suspicious package, etc. The identification rule of each layer may be collectively referred to as a first rule.
In step S206, an operation corresponding to the first control instruction is executed.
The control instruction may be to raise an alarm, to notify the owner of the house, or the like; it may also be an instruction to open the safety door if the scene is recognized as authorized. For example, when someone outside the safety door performs a predefined series of gestures and the authentication passes, the door is opened to authorize entry into the house.
Certainly, the scheme of the present application is not limited to house security; it can also be applied to similar equipment such as safes.
Through the above steps, the first scene outside the safety door is acquired through the camera, the first scene is recognized by the trained machine learning model to obtain the corresponding first control instruction, and an operation is subsequently executed according to the first control instruction, thereby increasing the number of recognizable scenes and improving recognition accuracy.
Optionally, obtaining a first control instruction associated with the first scenario by using a machine learning model includes: obtaining the scene classification to which the first scene belongs through the machine learning model; and taking the control instruction corresponding to the scene classification as the first control instruction.
The scene classification described above may be a classification according to a plurality of sets of scene classifications in the first sample information.
Optionally, obtaining, by the machine learning model, a scene classification to which the first scene belongs includes: acquiring a first image characteristic included in the first scene and acquiring a second image characteristic of a second scene in the first sample information; and when the feature similarity between the first image feature and the second image feature is greater than a threshold value, taking the scene classification to which the second scene belongs as the scene classification to which the first scene belongs.
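The feature-similarity comparison described above can be sketched with plain cosine similarity over image feature vectors. The 0.9 threshold and the vector representation are illustrative assumptions; the patent does not specify the similarity measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify_scene(first_feature, sample_scenes, threshold=0.9):
    """Return the class of the most similar sample scene whose similarity
    exceeds the threshold, or None when no classification can be acquired
    (the case handed to the user interface below)."""
    best_class, best_sim = None, threshold
    for feature, scene_class in sample_scenes:
        sim = cosine_similarity(first_feature, feature)
        if sim > best_sim:
            best_class, best_sim = scene_class, sim
    return best_class
```

The `None` result corresponds to the fallback in the next optional embodiment, where the scene is output to a user interface for manual classification.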
Optionally, obtaining, by the machine learning model, the scene classification to which the first scene belongs includes: in the case that the scene classification to which the first scene belongs cannot be acquired, outputting the first scene to a user interface; and acquiring a classification instruction through the user interface, and acquiring, according to the classification instruction, a first classification to which the first scene belongs and a first control instruction.
Optionally, after the classification instruction is obtained through the user interface and the first classification and first control instruction to which the first scene belongs are obtained according to it, the machine learning model is updated by using the first classification and the first control instruction as its input information.
Optionally, before the obtaining of the first control instruction having an association relationship with the first scenario by using the machine learning model, the method includes:
step one, a plurality of groups of scenes of the first sample information are used as input information and input into the original model, and the plurality of groups of scenes are identified in the original model according to the first rule to obtain an identification result;
step two, checking the recognition result against the control instructions actually corresponding to the plurality of groups of scenes in the first sample information, and obtaining the recognition accuracy rate;
step three, under the condition that the identification accuracy is lower than a threshold value, adjusting the original model;
and step four, repeatedly executing step one to step three until the recognition accuracy is higher than the threshold, and outputting the resulting model as the machine learning model.
Adjusting the original model as described above means adjusting the coefficients of each layer of the original model, not the first rule.
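Steps one through four can be sketched as an iterate-until-accurate loop. The toy model below (a single threshold coefficient over a numeric scene score) is purely an assumption for illustration; the patent's original model is a multi-layer network whose layer coefficients are adjusted:

```python
class ToyModel:
    """Stand-in for the original model: one coefficient separates 'alarm'
    from 'ignore' based on a numeric scene score. Purely illustrative."""
    def __init__(self):
        self.cutoff = 0.0

    def identify(self, scene_score):
        return "alarm" if scene_score >= self.cutoff else "ignore"

    def adjust(self):
        # Step three: adjust the model's coefficients, not the first rule.
        self.cutoff += 0.1

def train(model, scenes, actual_instructions, threshold=0.95, max_rounds=100):
    """Steps one-four: identify, check against actual instructions, adjust, repeat."""
    for _ in range(max_rounds):
        results = [model.identify(s) for s in scenes]                # step one
        accuracy = sum(r == t for r, t in
                       zip(results, actual_instructions)) / len(scenes)  # step two
        if accuracy > threshold:                                     # step four
            return model
        model.adjust()                                               # step three
    return model
```

A real implementation would replace `adjust` with gradient-based updates, but the control flow of the four steps is the same.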
The following description is made in conjunction with another embodiment of the present application.
Intelligent security recognition systems in the related art are built on the premise that the recognition categories are known a priori. In the real world, however, recognition scenes vary widely: thousands of different recognition scenes encompass countless recognition categories. Even when a specific set of recognition categories can be defined for a given scene, abnormal events will inevitably occur. When training a security-system model, it is impossible to cover all application scenes and user actions. Therefore, the recognition rate of the smart cat eye is low for user actions or application scenes outside its prior knowledge.
In this embodiment, the deep neural network is expanded by modifying its classification weight-transfer matrix, so that the number of recognized categories grows dynamically. New recognition categories can be added dynamically for different application scenes and different user actions, bringing the system closer to real smart-cat-eye recognition scenes. Specifically, the method comprises the following steps:
the method comprises the steps of firstly, reducing the requirement of incremental learning on newly added category samples by transferring the global classification knowledge and the related inter-category contact information to newly added weight lists, and finishing training of the deep classification model expansion recognition categories by using a small number of manually labeled samples, thereby reducing the cost of manual labeling and the cost of network updating.
Second, performance jitter of the model is reduced and incremental training is accelerated by introducing a balanced-training method and a different-speed training method. As the model iterates and the depth model expands, the total number of recognition categories in the system grows one by one, more and more samples can be recognized, and overall performance improves continuously; unknown categories that were not predefined are handled accordingly as an open-set recognition problem.
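The balanced-training and different-speed training ideas above can be sketched as follows. The batch size, base learning rate, and sampling scheme are illustrative assumptions; only the two principles — equal samples per class, doubled rate for the new class — come from the text:

```python
import random

def balanced_batch(known_samples, new_samples, batch_size=8):
    """Balanced training: draw equal numbers of known-class and newly added-class
    samples per batch, so the new class is neither drowned out nor dominant."""
    half = batch_size // 2
    batch = [(s, "known") for s in random.sample(known_samples, half)]
    batch += [(s, "new") for s in random.sample(new_samples, half)]
    random.shuffle(batch)
    return batch

def learning_rate(tag, base_rate=0.01):
    """Different-speed training: the newly added class learns twice as fast."""
    return base_rate * 2 if tag == "new" else base_rate
```

An optimizer would apply `learning_rate(tag)` per sample when updating the newly added weight column versus the existing columns.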
For example, in a home-residence application scene, a user whose body forms the shape of the Chinese character "大" (arms outstretched) represents the owner of the residence; this arms-outstretched action performed in the residence scene is defined as a training sample of a defined category, whereas a user standing upright in an office-area scene is taken as an undefined-category sample.
Fig. 3 is a schematic diagram of training a scene recognition model according to another embodiment of the present application. As shown in fig. 3, machine learning is a loop process. In one iteration, the input information contains both unknown-category samples and known-category samples and is recognized by the machine learning model (also called a deep neural network): known-category samples are accepted, while unknown-category samples are detected and rejected. The unknown-category samples are then labeled manually, incremental learning is performed according to the manual labels, and the machine learning model is readjusted for the next round of use.
A deep neural network classification model with an initially fixed number of recognition categories, i.e., the model to be expanded, is trained with predefined known-category samples from the specific recognition scene. The model to be expanded is trained with a sample set containing defined-category samples, and its classification-threshold information is obtained. A sample set containing undefined-category samples is then fed into the model, and at least part of the undefined-category samples are determined according to the classification-threshold information. At least part of these undefined-category samples are labeled manually. The number of columns of the weight-transfer matrix in the classification layer of the deep neural network is increased so as to increase the total number of categories the model recognizes, where the added weight columns contain first information related to global classification and second information related to inter-class associations. The updated model is then trained incrementally with the manually labeled undefined-category samples.
The process of determining undefined-category samples comprises extracting the feature activation values of correctly classified samples at the deep neural network's classification layer and computing, in turn, the acceptance threshold, rejection threshold, and distance threshold of each known category. Using these multi-category thresholds, unknown-category samples that were not predefined are detected in images that contain both known- and unknown-category samples under the real recognition scene. The detected unknown-category samples are labeled manually. The classification layer of the deep neural network is then updated by increasing the number of columns of the corresponding weight-transfer matrix so as to expand the total number of recognized categories. The newly added weight columns are initialized with a reinforced initialization method, which migrates the deep network's global classification knowledge (i.e., the first information) and inter-class association information (i.e., the second information) into the new columns so as to reduce the manual-labeling requirement. The updated classification model is trained incrementally with the manually labeled unknown-category samples; a balanced-training method ensures that each known category and the newly added category have the same number of samples, while a different-speed training method makes the learning rate of the newly added samples twice as fast as that of the known categories, so that incremental training finishes quickly. After the model update stabilizes, the new recognition category is added to the known-category pool.
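The reinforced initialization of a newly added weight column can be sketched as blending global classification knowledge (first information) with the column of the most related known class (second information). The matrix layout (rows = features, columns = classes), the mean-of-columns choice, and the mixing ratio `alpha` are illustrative assumptions:

```python
def expand_classifier(weights, related_class, alpha=0.5):
    """Append one new-class column to the classification-layer weight matrix.
    The new column blends the mean of all existing columns (global knowledge,
    'first information') with the column of the most related known class
    (inter-class association, 'second information')."""
    num_classes = len(weights[0])
    for row in weights:
        global_part = sum(row) / num_classes      # first information
        related_part = row[related_class]         # second information
        row.append(alpha * global_part + (1 - alpha) * related_part)
    return weights
```

Starting the new column near informative values, instead of at random, is what lets the incremental training converge with only a small number of manually labeled samples.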
It can be understood that multiple undefined categories may be identified in one iteration; correspondingly, multiple columns of the weight-transfer matrix are added to the classification layer of the deep neural network in one model update.
Regarding the first information: the convolutional neural network is highly invariant to translation, scaling, tilting, and other forms of deformation; it extracts features such as edges and corner points from the input picture and obtains deep features through convolution and downsampling across the network layers. The network parameters are trained to convergence based on the back-propagation algorithm and its extensions, so that the parameter space of the network layers stores global knowledge of the recognition domain, such as gestures and transformations of the user's body characteristics. The deep neural network classification model performs classification based on this knowledge.
Regarding the second information, i.e., the similarity or association information between categories: it may take many forms, for example similar gestures or similar body characteristics.
By adopting the above scheme, the problem in the related art that the low recognition rate of the smart cat eye leads to a poor security effect is solved.
According to this scheme, during model training, weight columns for undefined user actions in different scenes are added to the deep neural network model, so that undefined user actions can be added dynamically, the recognition rate of the smart cat eye is improved, and the security effect is improved.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
Example two
In this embodiment, a scene recognition apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
According to another embodiment of the present application, there is also provided an apparatus for recognizing a scene, including:
the first acquisition module is used for acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door;
a second obtaining module, configured to obtain, by using a machine learning model, a first control instruction having an association relationship with the first scene, where the machine learning model is a model obtained by training an original model using first sample information as input information of the original model, and the first sample information includes a first rule and multiple groups of scenes, where the first rule is a rule for identifying a control instruction according to the scenes;
and the control module is used for executing the operation corresponding to the first control instruction.
A first scene outside the safety door is acquired through the camera; the first scene is recognized by the trained machine learning model to obtain the corresponding first control instruction; and an operation is subsequently executed according to the first control instruction.
Optionally, the second obtaining module is further configured to obtain, through the machine learning model, the scene classification to which the first scene belongs, and to use the control instruction corresponding to that scene classification as the first control instruction.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
EXAMPLE III
Embodiments of the present application also provide a storage medium. Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
S1, acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door;
S2, obtaining, by using a machine learning model, a first control instruction having an association relation with the first scene, wherein the machine learning model is obtained by training an original model with first sample information as input information of the original model, the first sample information comprises a first rule and a plurality of groups of scenes, and the first rule is a rule for identifying a control instruction according to a scene;
and S3, executing an operation corresponding to the first control instruction.
Optionally, in this embodiment, the storage medium may include but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, which can store program codes.
Embodiments of the present application further provide an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door;
S2, obtaining, by using a machine learning model, a first control instruction having an association relation with the first scene, wherein the machine learning model is obtained by training an original model with first sample information as input information of the original model, the first sample information comprises a first rule and a plurality of groups of scenes, and the first rule is a rule for identifying a control instruction according to a scene;
and S3, executing an operation corresponding to the first control instruction.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and may optionally be implemented by program code executable by a computing device, such that they can be stored in a storage device and executed by a computing device. In some cases, the steps shown or described may be executed in an order different from that given here, or the modules or steps may be fabricated as separate integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. A method for identifying a scene, comprising:
acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door;
obtaining a first control instruction having an association relation with the first scene by using a machine learning model, wherein the machine learning model is obtained by training an original model using first sample information as the input information of the original model, the first sample information comprises a first rule and a plurality of groups of scenes, and the first rule is a rule for identifying the control instruction according to the scenes;
executing the operation corresponding to the first control instruction;
the first rule is a multi-layer rule for the original model, and each layer of the original model identifies information according to one first rule: the first layer of the original model identifies, according to its corresponding first rule, whether persons are present in the multiple groups of scenes, and the second layer of the original model identifies, according to its corresponding first rule, whether the persons in the multiple groups of scenes carry lock-picking tools or suspicious packages;
before the machine learning model is used to obtain the first control instruction having an association relation with the first scene, the method further includes:
step one, inputting a plurality of groups of scenes of the first sample information as input information into the original model, and identifying the plurality of groups of scenes in the original model according to the first rule to obtain an identification result;
step two, checking the recognition result against the control instructions actually corresponding to the multiple groups of scenes of the first sample information, to obtain the recognition accuracy;
step three, under the condition that the identification accuracy is lower than a threshold value, adjusting the original model;
step four, the step one to the step three are repeatedly executed until the identification accuracy is higher than a threshold value, and a model is output to serve as the machine learning model;
wherein adjusting the original model comprises: adjusting coefficients for each layer of the original model.
2. The method of claim 1, wherein obtaining a first control instruction associated with the first scene using a machine learning model comprises:
obtaining a scene classification to which the first scene belongs through the machine learning model;
and taking the control instruction corresponding to the scene classification as the first control instruction.
3. The method of claim 2, wherein obtaining, by the machine learning model, a scene classification to which the first scene belongs comprises:
acquiring a first image feature included in the first scene, and acquiring a second image feature of a second scene in the first sample information;
and when the feature similarity between the first image feature and the second image feature is greater than a threshold, taking the scene classification to which the second scene belongs as the scene classification to which the first scene belongs.
4. The method of claim 2, wherein obtaining, by the machine learning model, a scene classification to which the first scene belongs comprises:
outputting the first scene to a user interface under the condition that the scene classification to which the first scene belongs cannot be acquired;
and acquiring a classification instruction through the user interface, and acquiring a first classification and a first control instruction to which the first scene belongs according to the classification instruction.
5. The method according to claim 4, wherein after the classification instruction is obtained through the user interface and the first classification to which the first scene belongs and the first control instruction are obtained according to the classification instruction, the method further comprises:
and updating the machine learning model according to the first classification and the first control instruction as input information of the machine learning model.
6. An apparatus for recognizing a scene, comprising:
the first acquisition module is used for acquiring a first scene outside a safety door through a camera, wherein the camera is arranged on the safety door;
a second obtaining module, configured to obtain, by using a machine learning model, a first control instruction having an association relationship with the first scene, where the machine learning model is a model obtained by training an original model using first sample information as input information of the original model, and the first sample information includes a first rule and multiple groups of scenes, where the first rule is a rule for identifying a control instruction according to the scenes;
the control module is used for executing the operation corresponding to the first control instruction;
the first rule is a multi-layer rule for the original model, and each layer of the original model identifies information according to one first rule: the first layer of the original model identifies, according to its corresponding first rule, whether persons are present in the multiple groups of scenes, and the second layer of the original model identifies, according to its corresponding first rule, whether the persons in the multiple groups of scenes carry lock-picking tools or suspicious packages;
a model training module, configured to train the machine learning model, before the second obtaining module obtains the first control instruction having an association relation with the first scene using the machine learning model, by:
step one, inputting a plurality of groups of scenes of the first sample information as input information into the original model, and identifying the plurality of groups of scenes in the original model according to the first rule to obtain an identification result;
step two, checking the recognition result against the control instructions actually corresponding to the multiple groups of scenes of the first sample information, to obtain the recognition accuracy;
step three, under the condition that the identification accuracy is lower than a threshold value, adjusting the original model;
step four, the step one to the step three are repeatedly executed until the identification accuracy is higher than a threshold value, and a model is output to serve as the machine learning model;
wherein adjusting the original model comprises: adjusting coefficients for each layer of the original model.
7. The apparatus of claim 6, wherein the second obtaining module is further configured to obtain, through the machine learning model, a scene classification to which the first scene belongs, and to take the control instruction corresponding to the scene classification as the first control instruction.
8. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when executed.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
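The classification scheme of claims 3 and 4 can be sketched as follows. All names, feature vectors, and the cosine-similarity measure are illustrative assumptions (the patent does not specify a similarity function): a new scene's image features are compared with those of sample scenes, and when no similarity exceeds the threshold, the scene is handed to the user interface for classification.

```python
# Hypothetical sketch of claims 3-4 (names and the similarity measure
# are illustrative): classify a new scene by comparing its image-feature
# vector with those of sample scenes; if no similarity exceeds the
# threshold, fall back to asking the user for a classification.
import math

def cosine_similarity(a, b):
    # One possible "feature similarity": cosine of the angle between vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(first_features, samples, threshold=0.8, ask_user=None):
    # samples: list of (feature_vector, scene_classification) pairs.
    best_label, best_sim = None, -1.0
    for features, label in samples:
        sim = cosine_similarity(first_features, features)
        if sim > best_sim:
            best_label, best_sim = label, sim
    if best_sim > threshold:
        # Claim 3: similarity above threshold -> adopt the sample's class.
        return best_label
    # Claim 4: classification unknown -> output the scene to a user
    # interface and obtain a classification instruction.
    return ask_user(first_features) if ask_user else None

samples = [([1.0, 0.0, 0.1], "stranger_at_door"),
           ([0.0, 1.0, 0.1], "resident_returning")]
print(classify([0.9, 0.1, 0.1], samples))  # prints: stranger_at_door
```

Per claim 5, a label obtained through the user-interface fallback would then be fed back as new sample information to update the model.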
CN201811070478.6A 2018-09-13 2018-09-13 Scene recognition method and device Active CN110895670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811070478.6A CN110895670B (en) 2018-09-13 2018-09-13 Scene recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811070478.6A CN110895670B (en) 2018-09-13 2018-09-13 Scene recognition method and device

Publications (2)

Publication Number Publication Date
CN110895670A (en) 2020-03-20
CN110895670B (en) 2022-09-09

Family

ID=69785726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811070478.6A Active CN110895670B (en) 2018-09-13 2018-09-13 Scene recognition method and device

Country Status (1)

Country Link
CN (1) CN110895670B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320945A (en) * 2015-10-30 2016-02-10 小米科技有限责任公司 Image classification method and apparatus
CN106997629A (en) * 2017-02-17 2017-08-01 北京格灵深瞳信息技术有限公司 Access control method, apparatus and system
CN107480660A (en) * 2017-09-30 2017-12-15 深圳市锐曼智能装备有限公司 Dangerous goods identifying system and its method
CN107506799A (en) * 2017-09-01 2017-12-22 北京大学 A kind of opener classification based on deep neural network is excavated and extended method and device
CN108268850A (en) * 2018-01-24 2018-07-10 成都鼎智汇科技有限公司 A kind of big data processing method based on image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154124A (en) * 2016-03-03 2017-09-12 美的集团股份有限公司 Control method, server, doorbell and the door bell and button system of doorbell
CN107506755B (en) * 2017-09-26 2020-08-18 云丁网络技术(北京)有限公司 Monitoring video identification method and device


Also Published As

Publication number Publication date
CN110895670A (en) 2020-03-20

Similar Documents

Publication Publication Date Title
US11087180B2 (en) Risky transaction identification method and apparatus
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
US10970719B2 (en) Fraudulent transaction identification method and apparatus, server, and storage medium
US20210334604A1 (en) Facial recognition method and apparatus
CN110489951A (en) Method, apparatus, computer equipment and the storage medium of risk identification
WO2020238353A1 (en) Data processing method and apparatus, storage medium, and electronic apparatus
US10692089B2 (en) User classification using a deep forest network
CN112418360B (en) Convolutional neural network training method, pedestrian attribute identification method and related equipment
EP3648015B1 (en) A method for training a neural network
CN113780243B (en) Training method, device, equipment and storage medium for pedestrian image recognition model
CN110782333A (en) Equipment risk control method, device, equipment and medium
CN114550053A (en) Traffic accident responsibility determination method, device, computer equipment and storage medium
EP3983953A1 (en) Understanding deep learning models
CN110610191A (en) Elevator floor identification method and device and terminal equipment
CN113379045B (en) Data enhancement method and device
CN110895602A (en) Identity authentication method and device, electronic equipment and storage medium
CN110895670B (en) Scene recognition method and device
CN115700845A (en) Face recognition model training method, face recognition device and related equipment
CN110162957A (en) Method for authenticating and device, storage medium, the electronic device of smart machine
CN115578765A (en) Target identification method, device, system and computer readable storage medium
CN114764593A (en) Model training method, model training device and electronic equipment
CN114387480A (en) Portrait scrambling method and device
CN111931148A (en) Image processing method and device and electronic equipment
CN116778534B (en) Image processing method, device, equipment and medium
CN117152567B (en) Training method, classifying method and device of feature extraction network and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant