CN110059594B - Environment perception self-adaptive image recognition method and device - Google Patents


Info

Publication number
CN110059594B
CN110059594B (application CN201910261185.4A)
Authority
CN
China
Prior art keywords
model
scene
detected
image
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910261185.4A
Other languages
Chinese (zh)
Other versions
CN110059594A (en)
Inventor
杨一
宋扬
陈雪松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201910261185.4A priority Critical patent/CN110059594B/en
Publication of CN110059594A publication Critical patent/CN110059594A/en
Application granted granted Critical
Publication of CN110059594B publication Critical patent/CN110059594B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an environment-aware adaptive image recognition method and device. The method comprises: a model obtaining step of obtaining a plurality of models, each adapted to a different scene; a base library presetting step of performing feature extraction on each image in the base library through each model and storing the resulting base library features in the model base library corresponding to that model; an information acquisition step of acquiring an image to be detected; a scene judging step of judging the scene type of the image to be detected and selecting one or more models as judgment models according to the scene type; a feature extraction step of extracting the features of the image to be detected through the one or more judgment models; and an information comparison step of comparing the extracted features with the base library features in the model base library and judging whether they match. The environment-aware adaptive image recognition method improves the accuracy of the image recognition result.

Description

Environment perception self-adaptive image recognition method and device
Technical Field
The disclosure relates to the field of image recognition, in particular to an environment perception self-adaptive image recognition method and device.
Background
With the rapid development of computer science in the field of human-computer interaction, image recognition technology has attracted widespread attention. Image recognition comprises two stages, feature extraction and feature comparison. Traditionally, a single unified model is used to extract image features under all conditions, and the extracted features are then compared with reference features to recognize the image.
Disclosure of Invention
In order to overcome the problems in the prior art, the present disclosure provides an environment-aware adaptive image recognition method and apparatus.
In a first aspect, an embodiment of the present disclosure provides an environment-aware adaptive image recognition method, which includes: a model obtaining step of obtaining a plurality of models, each adapted to a different scene; a base library presetting step of performing feature extraction on each image in the base library through each model and storing the resulting base library features in the model base library corresponding to that model; an information acquisition step of acquiring an image to be detected; a scene judging step of judging the scene type of the image to be detected and selecting one or more models as judgment models according to the scene type; a feature extraction step of extracting the features of the image to be detected through the one or more judgment models; and an information comparison step of comparing the extracted features with the base library features in the model base library and judging whether they match.
In one example, the model comprises a scene single-dimensional adaptation model or a scene multi-dimensional adaptation model: the scene single-dimensional adaptation model adapts to scene types divided by a single scene discrimination factor, while the scene multi-dimensional adaptation model adapts to scene types divided by a plurality of scene discrimination factors.
In one example, the scene discrimination factors include time, light, people-stream density, weather, and region.
In one example, the scene determining step determines the scene type according to the scene discrimination factor.
In one example, in the information comparing step: the model base library is a model base library corresponding to the judgment model.
In one example, in the information comparing step: the model base library is all model base libraries.
In one example, the environment-aware adaptive image recognition method further comprises an alarm step that, according to the matching result, sends out an alarm signal when at least one base library feature matches the features to be detected.
In one example, the scene determining step further includes: judging the scene type in real time; the information acquisition step further includes: acquiring an image to be detected through real-time acquisition; and adjusting the acquisition parameters according to the scene type to adapt to the corresponding scene type.
In one example, an environment-aware adaptive image recognition method further comprises: and a storage step, storing the characteristics to be detected in a model characteristic library corresponding to the judgment model.
In one example, the storing step further comprises: and storing the image to be detected corresponding to the characteristic to be detected.
In a second aspect, the disclosed embodiments provide an environment-aware adaptive image recognition apparatus having the functions needed to implement the environment-aware adaptive image recognition method of the first aspect. These functions may be implemented in hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one example, an environment-aware adaptive image recognition apparatus includes: a model obtaining module for obtaining a plurality of models, each adapted to a different scene; a base library presetting module for performing feature extraction on each image in the base library through the plurality of models to obtain base library features and storing them in the model base libraries corresponding to the respective models; an information acquisition module for obtaining an image to be detected; a scene judging module for judging the scene type of the image to be detected and selecting one or more models as judgment models according to the scene type; a feature extraction module for extracting the features of the image to be detected through the one or more judgment models; and an information comparison module for comparing the features to be detected with the base library features in the model base library and judging whether they match.
In a third aspect, an embodiment of the present disclosure provides an electronic device, where the electronic device includes: a memory to store instructions; and a processor for invoking the instructions stored by the memory to perform the context-aware adaptive image recognition method.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform a method of context-aware adaptive image recognition.
According to the method and device for environment-aware adaptive image recognition provided by the present disclosure, on the one hand, feature extraction is performed on each image in the base library through a plurality of models, so that base library features are obtained for each image under each scene type; on the other hand, the model used to extract the features of the image to be detected is adaptively selected according to the scene type, improving the accuracy of feature extraction under that scene. Because the features extracted under a given scene type are compared with base library features extracted under the same scene type, the accuracy of the image recognition result is improved and false-recognition and missed-recognition rates are effectively reduced.
Drawings
The above and other objects, features and advantages of the embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a diagram illustrating an environment-aware adaptive image recognition method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating another environment-aware adaptive image recognition method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an environment-aware adaptive image recognition apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating another environment-aware adaptive image recognition apparatus provided by an embodiment of the present disclosure;
fig. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The principles and spirit of the present disclosure will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
It should be noted that, although the expressions "first", "second", etc. are used herein to describe different modules, steps, data, etc. of the embodiments of the present disclosure, the expressions "first", "second", etc. are merely used to distinguish between different modules, steps, data, etc. and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable.
Fig. 1 is a schematic diagram of an environment-aware adaptive image recognition method 10 according to an embodiment of the present disclosure. As shown in fig. 1, the context-aware adaptive image recognition method 10 includes a model obtaining step 110, a base library presetting step 120, an information obtaining step 130, a scene judging step 140, a feature extracting step 150, and an information comparing step 160. The respective steps in fig. 1 are explained in detail below.
In the model obtaining step 110, a plurality of models may be obtained in advance, each adapted to a different scene. In practical applications, as the number of scene types increases, the number of scene-adapted models increases correspondingly. For example, the scenes may be a daytime scene, a nighttime scene, a high-density people-stream scene, and a low-density people-stream scene, so the corresponding models are a daytime model, a nighttime model, a high-density people-stream model, and a low-density people-stream model; alternatively, the models may be set up as combined models: a daytime high-density people-stream model, a daytime low-density people-stream model, a nighttime high-density people-stream model, and a nighttime low-density people-stream model. Note that different models are trained on different training samples. For example, the daytime high-density model is trained with daytime high-density images; during training, a loss function is computed from the difference between the predicted classification and the actual classification, and the model is updated according to the loss function so that it is continuously optimized.
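As a minimal sketch of the model obtaining step (all names and the fake feature function are hypothetical, standing in for trained scene-specific networks), the scene-to-model mapping above can be held in a simple registry:

```python
# Hypothetical sketch: a registry mapping scene types to scene-adapted
# feature extractors. Each "model" is stood in for by a tagging function;
# a real deployment would load networks trained on scene-matched samples
# (e.g. daytime high-density images for the daytime high-density model).

def make_model(scene_tag):
    # Stand-in for a trained network adapted to one scene type.
    def extract(image):
        # Fake feature: the scene tag plus a small numeric fingerprint.
        return (scene_tag, hash(image) % 1000)
    return extract

MODEL_REGISTRY = {
    "day": make_model("day"),
    "night": make_model("night"),
    "high_density": make_model("high_density"),
    "low_density": make_model("low_density"),
}

def get_models(scene_types):
    # Model obtaining step: fetch the models adapted to the given scenes.
    return {s: MODEL_REGISTRY[s] for s in scene_types}
```

The registry keeps adding a new scene type as cheap as adding one entry, which matches the observation that models grow in number with the scene types.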
In addition, the models used in the respective scenes differ, mainly in model size. A large model uses a deep network without pruning; for example, a ResNet-101-based large model with deeper modified stacked components may be applied in night scenes, while for daytime scenes a reconstructed model based on ResNet-50 can be applied. Models for different subdivided scenes can also be distilled from the large model.
In the base library presetting step 120, feature extraction is performed on each image in the base library through each model, and the resulting base library features are stored in the model base library corresponding to that model. If the models acquired in the model obtaining step 110 are a daytime model, a nighttime model, a high-density people-stream model, and a low-density people-stream model, then in the base library presetting step 120 the environment-aware adaptive image recognition method 10 extracts the features of each image in the base library with each of the four models, forming base library features for each image under the corresponding scene. The daytime features of each image are stored in the daytime-scene model base library, the nighttime features in the nighttime-scene model base library, the high-density people-stream features in the high-density model base library, and the low-density people-stream features in the low-density model base library.
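The presetting step can be sketched as a nested loop over models and gallery images; feature extraction is faked with a tagging function here, so all names are illustrative:

```python
# Sketch of the base-library presetting step: every base-library image is
# run through every scene model, and the resulting features are stored in
# the per-model base library (one gallery per model).

def extract_feature(model_name, image):
    # Stand-in for running one scene-adapted model on one image.
    return f"{model_name}:{image}"

def preset_galleries(model_names, gallery_images):
    # One model base library per model, keyed by image id.
    galleries = {m: {} for m in model_names}
    for m in model_names:
        for img in gallery_images:
            galleries[m][img] = extract_feature(m, img)
    return galleries
```

With N models and M base-library images this produces N×M stored features, which is the price paid up front so that any probe, whatever its scene, can later be compared against features extracted under the same scene.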
In the information acquisition step 130, an image to be detected is obtained. The image may be retrieved from an image storage device, or collected in real time through a monitoring device.
In the scene judging step 140, the scene type of the image to be detected is judged, and one or more models are selected as judgment models according to the scene type. For example, if the scene type of the image to be detected is judged to be a daytime scene, the daytime model is selected as the judgment model; if the scene type is judged to be a daytime high-density people-stream scene, the daytime model and the high-density people-stream model may be selected together as judgment models, or, depending on how the models are set up, a combined daytime high-density people-stream model may be selected as the judgment model.
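The two selection strategies just described (a combined multi-dimensional model when one exists, otherwise several single-dimensional models) can be sketched as follows; the model names are the illustrative ones used throughout:

```python
# Sketch of the scene-judging step's model selection: prefer a combined
# multi-dimensional model (e.g. "day_high_density") when the deployment
# provides one, and fall back to the corresponding single-dimensional
# models otherwise.

def select_models(light, density, available):
    combined = f"{light}_{density}"      # e.g. "day_high_density"
    if combined in available:
        return [combined]                # one multi-dimensional judgment model
    return [light, density]              # two single-dimensional judgment models
```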
In the feature extraction step 150, the features of the image to be detected are extracted through the one or more judgment models. For example, if the daytime model and the high-density people-stream model are selected as judgment models in the scene judging step 140, then in the feature extraction step 150 the features to be detected are extracted through each of these two models.
In the information comparison step 160, the features to be detected are compared with the base library features of the images in the model base library to judge whether the features to be detected match the base library features of any image.
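The patent does not fix a particular similarity measure; as one common choice, the comparison can be sketched with cosine similarity and an illustrative threshold (the 0.8 value is an assumption, not from the source):

```python
import math

# Sketch of the information-comparison step: the probe feature is scored
# against each base-library feature by cosine similarity and declared a
# match when the score clears a threshold. Threshold 0.8 is illustrative.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match(probe, gallery, threshold=0.8):
    # Return the ids of base-library entries whose features match the probe.
    return [gid for gid, feat in gallery.items()
            if cosine(probe, feat) >= threshold]
```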
According to the environment-aware adaptive image recognition method 10 provided by the present disclosure, on the one hand, feature extraction is performed on each image in the base library through each model to obtain base library features, that is, the features of each image in the base library under different scene types; on the other hand, the model used to extract the features of the image to be detected is adaptively selected according to the scene type, improving the accuracy of feature extraction under the corresponding scene type. The features extracted according to the characteristics of the corresponding scene type are compared with the base library features of images under the same scene type, which improves the accuracy of the image recognition result and effectively reduces the false-recognition and missed-recognition rates.
In one example, the model comprises a scene single-dimensional adaptation model or a scene multi-dimensional adaptation model, wherein the scene single-dimensional adaptation model adapts to scene types divided by a single scene discrimination factor and the scene multi-dimensional adaptation model adapts to scene types divided by a plurality of scene discrimination factors.
The models are divided into scene single-dimensional adaptation models and scene multi-dimensional adaptation models. In the model obtaining step 110, the environment-aware adaptive image recognition method 10 may decide whether to use single-dimensional or multi-dimensional adaptation models according to the characteristics of the scenes at the deployment site.
When the scenes at the deployment site involve only a few scene discrimination factors, the models obtained in the model obtaining step 110 can be scene single-dimensional adaptation models. Using single-dimensional models, for example a daytime model and a nighttime model divided by light, plus a high-density and a low-density people-stream model divided by people flow, keeps the feature extraction step 150 and the information comparison step 160 simple and efficient.

When the scenes at the deployment site involve more scene discrimination factors, the models obtained in the model obtaining step 110 can be scene multi-dimensional adaptation models, in which several single-dimensional models are merged into one multi-dimensional model; for example, according to light and people flow, the models are divided into a daytime high-density, a daytime low-density, a nighttime high-density, and a nighttime low-density people-stream model. Multi-dimensional adaptation models reduce the storage space occupied by the models and, when a scene involves several discrimination factors at once, extract in a single pass the features that several single-dimensional models would otherwise extract in turn, improving both the feature extraction speed and the speed of the subsequent information comparison.
Scene discrimination factors may include time, light, people-stream density, weather, and region, among others. Taking the time factor as an example, the scene single-dimensional adaptation models can be divided into a daytime model and a nighttime model, where the time points dividing day from night can be set manually according to the specific conditions of the deployment site.

Taking the time and people-stream density factors together as an example, the scene multi-dimensional adaptation models can be divided into a daytime high-density, a daytime low-density, a nighttime high-density, and a nighttime low-density people-stream model, where the people-flow threshold dividing high density from low density can likewise be set manually according to the specific conditions of the deployment site.
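The division by two discrimination factors can be sketched as below; the cut points (hour 6-19 as daytime, 50 people as high density) are illustrative placeholders, since the patent leaves them to be set manually per deployment site:

```python
# Sketch of scene-type division by two scene discrimination factors
# (time and people-stream density). Both thresholds are illustrative
# and would be tuned to the deployment site.

def classify_scene(hour, people_count):
    light = "day" if 6 <= hour < 19 else "night"
    density = "high_density" if people_count >= 50 else "low_density"
    return f"{light}_{density}"   # e.g. "day_high_density"
```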
In one example, the scene judging step 140 determines the scene type according to the scene discrimination factors. Judging the scene type through the discrimination factors divides a complex scene into distinct, well-defined scene types.
In one example, in the information comparison step 160, the model base library is the one corresponding to the judgment model. That is, the base library features in the model base library corresponding to the judgment model are retrieved, according to the judgment model used for the features to be detected, and compared with those features. For example, if the judgment model is the daytime model, the environment-aware adaptive image recognition method 10 retrieves the base library features that were extracted with the daytime model, i.e. those in the model base library corresponding to the daytime model, compares them with the features to be detected, and judges whether the features to be detected match the base library features of some image in that model base library.
Using only the model base library corresponding to the judgment model, i.e. retrieving just the base library features extracted by that model for comparison, reduces the computation required by the environment-aware adaptive image recognition method 10 and increases the information comparison speed.
In one example, in the information comparison step 160, the model base libraries are all of the model base libraries. That is, the features to be detected are compared in parallel with the base library features in every model base library. Searching all model base libraries in parallel analyses the features to be detected more comprehensively and reduces the miss rate of the environment-aware adaptive image recognition method 10 during information comparison. For example, if the models are a daytime model, a nighttime model, a high-density people-stream model, and a low-density people-stream model, the base library presetting step 120 forms daytime, nighttime, high-density, and low-density base library features for each image. In the information comparison step 160, the features to be detected are compared with all four sets of base library features of each image, and it is judged whether the features to be detected match the base library features of some image under some model.
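Comparing against every model base library, rather than only the one matching the judgment model, can be sketched as below (names hypothetical; the `compare` predicate stands in for whatever similarity test the deployment uses):

```python
# Sketch of the "search all model base libraries" variant: the probe
# features (one per model) are checked against every per-model gallery,
# trading extra computation for a lower miss rate.

def match_all_galleries(probe_features, galleries, compare):
    # probe_features: {model_name: feature}
    # galleries:      {model_name: {image_id: feature}}
    hits = []
    for model_name, gallery in galleries.items():
        probe = probe_features.get(model_name)
        if probe is None:
            continue  # no probe feature was extracted under this model
        for gid, feat in gallery.items():
            if compare(probe, feat):
                hits.append((model_name, gid))
    return hits
```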
Taking the model base libraries to be all of the model base libraries, i.e. comparing the features to be detected with the base library features in every model base library, realizes a full comparison of the features to be detected against every image in the base library under every scene, further reducing the miss rate of the environment-aware adaptive image recognition method 10 during information comparison.
In one example, the environment-aware adaptive image recognition method 10 further includes an alarm step 170, which sends out an alarm signal when the matching result of the information comparison step 160 shows at least one base library feature matching the features to be detected. Generating an alarm signal as soon as any base library feature of some image matches the features to be detected, on the one hand, effectively reduces the miss rate of the method and, on the other hand, draws the attention of the user or others in time.
In one example, the scene judging step 140 further includes judging the scene type in real time. The scene type may also be determined in real time through a time period setting step (not shown in the figures): the deployment site is divided into different scene types by time, so that when a given time period arrives, the scene judging step 140 can directly determine the scene type and select one or more models as judgment models accordingly. The specific time periods can be set manually according to the deployment situation; for example, six a.m. to seven p.m. may be assigned to the daytime model, and seven p.m. to six a.m. (exclusive) to the nighttime model. During actual operation, the time period judging step switches the model automatically according to the current system clock and writes the currently active model into a system-wide global variable, that is, into etcd for distributed storage, so that all servers stay consistent and the changed configuration is automatically distributed by etcd to all cluster servers.
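The clock-driven switch can be sketched as follows; a plain dict stands in for the etcd cluster the text mentions, and the 6 a.m. / 7 p.m. boundaries follow the example above:

```python
# Sketch of time-period model switching: the active model is derived from
# the current hour and published to a shared configuration store. A dict
# stands in for etcd; a real system would put the key into etcd so every
# cluster server reads the same active-model value.

SHARED_CONFIG = {}  # stand-in for the distributed store (etcd)

def switch_model(hour):
    # Six a.m. to seven p.m. runs the day model, otherwise the night model.
    active = "day" if 6 <= hour < 19 else "night"
    SHARED_CONFIG["active_model"] = active  # "distributed" to all servers
    return active
```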
The information acquisition step 130 further includes acquiring the image to be detected through real-time collection and adjusting the collection parameters according to the scene type so as to adapt to it. Collection may be performed by a front-end image capture device, such as a camera or a video capture device. The collection parameters may include image sharpness, exposure, and viewing angle.
Adjusting the collection parameters according to the scene type ensures that, whatever the scene type at the deployment site, the collected images meet the requirements of the environment-aware adaptive image recognition method 10. For example, when the scene type is a nighttime scene, the method 10 may increase the exposure of the collected images so that they are clear enough to meet the requirements of image acquisition.
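A minimal sketch of scene-adaptive collection parameters (the exposure gains are illustrative values, not from the patent):

```python
# Sketch of scene-adaptive acquisition: collection parameters (here only
# an illustrative exposure gain) are chosen from the judged scene type so
# that, e.g., night images come out bright enough for feature extraction.

EXPOSURE_BY_SCENE = {"day": 1.0, "night": 2.5}  # illustrative gains

def acquisition_params(scene_type):
    # Unknown scene types fall back to the neutral daytime setting.
    return {"exposure": EXPOSURE_BY_SCENE.get(scene_type, 1.0)}
```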
Fig. 2 is a schematic diagram of another environment-aware adaptive image recognition method 10 according to an embodiment of the present disclosure. As shown in fig. 2, in one example, the environment-aware adaptive image recognition method 10 further includes a storage step 180, which stores the features to be detected in the model feature library corresponding to the judgment model. When images are added to the base library, feature extraction must be performed on the newly added images in the base library presetting step 120 to obtain their base library features; because the features to be detected are stored in the corresponding model feature library, they can be retrieved and compared with the base library features of the newly added images to judge whether they match.
In another example, the storage step 180 may also store the image corresponding to the features to be detected; when those features match a base library feature, the stored image can be found immediately.
The environment-aware adaptive image recognition method 10 can be applied to face recognition, object recognition, and the like.
Based on the same inventive concept, the embodiment of the present disclosure also provides an environment-aware adaptive image recognition apparatus 20.
Fig. 3 shows a schematic diagram of an environment-aware adaptive image recognition apparatus 20 provided by an embodiment of the present disclosure. As shown in Fig. 3, the environment-aware adaptive image recognition apparatus 20 includes a model obtaining module 210, a base library presetting module 220, an information obtaining module 230, a scene judging module 240, a feature extraction module 250, and an information comparison module 260.
The model obtaining module 210 is configured to obtain multiple models, each adapted to a different scene. The base library presetting module 220 is configured to perform feature extraction on each image in the base library through each of the multiple models to obtain base library features, and to store those features in the multiple model base libraries corresponding to the multiple models. The information obtaining module 230 is configured to obtain an image to be detected; it may obtain an image stored in an image storage device, or obtain the image directly through a video-stream camera or other front-end acquisition device. The scene judging module 240 is configured to judge the scene type of the image to be detected and to select one or more models as judgment models according to the scene type. As a variation, the scene judging module 240 may be installed directly on the information obtaining module 230, so that the scene type is judged while the information obtaining module 230 obtains the image to be detected.
The scene judging module 240 may be a sensor that determines the scene type directly, or a time-interval setting device that divides the implementation site into different scene types by time; when the implementation site is in the scene type of the corresponding time interval, the scene judging module 240 determines the scene type accordingly and selects one or more models as judgment models. The feature extraction module 250 is configured to extract the feature to be detected of the image to be detected through the one or more judgment models. The information comparison module 260 is configured to compare the feature to be detected with the base library features in the model base libraries and to judge whether they match.
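The time-interval variant of scene judgment, followed by judgment-model selection, can be sketched as below. The interval boundaries and scene/model names are illustrative assumptions, not taken from the patent:

```python
from datetime import time

# Hypothetical time-interval scene division: 06:00-18:00 is "day",
# everything else is "night".
SCENE_INTERVALS = [
    (time(6, 0), time(18, 0), "day"),
]

def judge_scene_by_time(t: time) -> str:
    """Time-interval setting device: map a clock time to a scene type."""
    for start, end, scene in SCENE_INTERVALS:
        if start <= t < end:
            return scene
    return "night"

def select_judgment_models(scene: str, models: dict) -> list:
    """Select the model(s) adapted to the judged scene as judgment models.

    `models` maps scene type -> model; a multi-dimensional adaptation
    could instead key on tuples of scene distinguishing factors.
    """
    return [model for s, model in models.items() if s == scene]
```

A sensor-based scene judging module would replace `judge_scene_by_time` with a reading from light or other sensors; the model-selection step stays the same.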
In one example, the information comparison module 260 is further configured to retrieve, according to the judgment model that produced the feature to be detected, the base library features in the model base library corresponding to that judgment model for comparison with the feature to be detected.
In another example, the information comparison module 260 is further configured to compare the feature to be detected with the base library features in all of the model base libraries.
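The comparison step can be sketched as a similarity search over the model base libraries. The patent does not specify a similarity metric; cosine similarity and the 0.8 threshold below are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def compare_with_base_libraries(feature, base_libraries, threshold=0.8):
    """Compare a feature to be detected with the base library features in
    all model base libraries; return (library, image_id) pairs that match.

    `base_libraries` maps model name -> {image_id: base_feature}.
    """
    matches = []
    for lib_name, lib in base_libraries.items():
        for image_id, base_feature in lib.items():
            if cosine_similarity(feature, base_feature) >= threshold:
                matches.append((lib_name, image_id))
    return matches
```

Restricting the search to the base library of the judgment model (the example of the previous paragraph but one) is just a call with a single-entry `base_libraries` dict; a non-empty result is the condition under which an alarm could be raised.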
In one example, the environment-aware adaptive image recognition apparatus 20 further includes an alarm module 270, which sends an alarm signal when, according to the comparison result, at least one base library feature matches the feature to be detected.
Fig. 4 shows a schematic diagram of another environment-aware adaptive image recognition apparatus 20 provided by an embodiment of the present disclosure. As shown in Fig. 4, the environment-aware adaptive image recognition apparatus 20 further includes a storage module 280 for storing the feature to be detected in the model feature library corresponding to the judgment model.
In another example, the storage module 280 may further store the image corresponding to the feature to be detected. When the feature to be detected matches a base library feature, the image stored by the storage module 280 allows the image matching that base library feature to be retrieved immediately.
Fig. 5 illustrates an electronic device 30 provided by an embodiment of the present disclosure. As shown in Fig. 5, the electronic device 30 includes a memory 310, a processor 320, and an Input/Output (I/O) interface 330. The memory 310 is used for storing instructions, and the processor 320 invokes the instructions stored in the memory 310 to perform the environment-aware adaptive image recognition method 10 of the disclosed embodiments. The processor 320 is connected to the memory 310 and the I/O interface 330, for example, via a bus system and/or another connection mechanism (not shown). The memory 310 may be used to store programs and data, including the programs for environment-aware adaptive image recognition involved in the embodiments of the present disclosure; the processor 320 executes the various functional applications and data processing of the electronic device 30 by running the programs stored in the memory 310.
In the embodiments of the present disclosure, the processor 320 may be implemented in at least one hardware form among a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA), and may be one or a combination of Central Processing Units (CPUs) or other processing units with data processing capability and/or instruction execution capability.
The memory 310 in embodiments of the present disclosure may comprise one or more computer program products, which may include various forms of computer-readable storage media, such as volatile and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD).
In the disclosed embodiments, the I/O interface 330 may be used to receive input (e.g., numeric or character information, or key signals related to user settings and function control of the electronic device 30) and to output various information (e.g., images or sounds). The I/O interface 330 may include one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys), a mouse, a joystick, a trackball, a microphone, a speaker, a touch panel, and the like.
In some embodiments, the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present disclosure can be implemented with standard programming techniques, with rule-based logic or other logic to accomplish the various method steps. It should also be noted that the words "means" and "module," as used herein and in the claims, are intended to encompass implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, which is executable by a computer processor for performing any or all of the described steps, operations, or procedures.
The foregoing description of the implementations of the disclosure has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the disclosure and its practical application to enable one skilled in the art to utilize the disclosure in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (13)

1. An environment-aware adaptive image recognition method, wherein the method comprises:
a model obtaining step, namely obtaining a plurality of models, wherein the models are respectively suitable for different scenes;
a base library presetting step, namely respectively extracting features of each image in a base library through each model to obtain base library features, and respectively storing the base library features into the model base library corresponding to each model, so as to form the base library features of each image in the corresponding scene;
an information acquisition step, namely acquiring an image to be detected;
a scene judging step, namely judging the scene type of the image to be detected, and selecting one or more models as judging models according to the scene type;
a feature extraction step, namely extracting the to-be-detected features of the to-be-detected image through one or more judgment models;
and an information comparison step, namely comparing the features to be detected with the base library features in the model base library, and judging whether they match.
2. The method of claim 1, wherein,
the model comprises a scene single-dimensional adaptation model or a scene multi-dimensional adaptation model,
the scene single-dimension adaptive model adapts to the scene type divided by a scene distinguishing factor;
the scene multi-dimensional adaptation model adapts to the scene type divided by a plurality of scene differentiating factors.
3. The method of claim 2, wherein,
the scene distinguishing factor includes: time, light, people-flow density, weather, and region.
4. The method of claim 2 or 3,
and the scene judging step is used for judging the scene type according to the scene distinguishing factor.
5. The method of claim 1, wherein,
in the information comparison step: and the model base library is the model base library corresponding to the judgment model.
6. The method of claim 1, wherein,
in the information comparison step: the model base library is all the model base libraries.
7. The method of claim 5 or 6,
the method further comprises: an alarm step, in which, according to the matching result, an alarm signal is sent out when at least one base library feature matches the feature to be detected.
8. The method of claim 1, wherein,
the scene judging step further includes: judging the scene type in real time;
the information acquiring step further includes: acquiring the image to be detected through real-time acquisition; and adjusting acquisition parameters according to the scene type to adapt to the corresponding scene type.
9. The method of claim 1, wherein,
the method further comprises the following steps: and a storage step, storing the characteristics to be detected in a model characteristic library corresponding to the judgment model.
10. The method of claim 9, wherein,
the storing step further comprises: and storing the image to be detected corresponding to the characteristic to be detected.
11. An environment-aware adaptive image recognition apparatus, wherein the environment-aware adaptive image recognition apparatus comprises:
the model obtaining module is used for obtaining a plurality of models, and the models are respectively suitable for different scenes;
the base library presetting module is used for respectively carrying out feature extraction on each image in a base library through a plurality of models to obtain base library features, and respectively storing the base library features into a plurality of model base libraries corresponding to the models to form base library features of each image in corresponding scenes;
the information acquisition module is used for acquiring an image to be detected;
the scene judging module is used for judging the scene type of the image to be detected and selecting one or more models as judging models according to the scene type;
the characteristic extraction module is used for extracting the to-be-detected characteristics of the to-be-detected image through one or more judgment models;
and the information comparison module is used for comparing the features to be detected with the base library features in the model base library, and judging whether they match.
12. An electronic device, wherein the electronic device comprises:
a memory to store instructions; and
a processor for invoking the memory-stored instructions to perform the context-aware adaptive image recognition method of any one of claims 1-10.
13. A computer-readable storage medium, wherein,
the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the context-aware adaptive image recognition method of any of claims 1-10.
CN201910261185.4A 2019-04-02 2019-04-02 Environment perception self-adaptive image recognition method and device Active CN110059594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910261185.4A CN110059594B (en) 2019-04-02 2019-04-02 Environment perception self-adaptive image recognition method and device

Publications (2)

Publication Number Publication Date
CN110059594A CN110059594A (en) 2019-07-26
CN110059594B true CN110059594B (en) 2021-10-22

Family

ID=67318219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910261185.4A Active CN110059594B (en) 2019-04-02 2019-04-02 Environment perception self-adaptive image recognition method and device

Country Status (1)

Country Link
CN (1) CN110059594B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347814A (en) * 2019-08-07 2021-02-09 中兴通讯股份有限公司 Passenger flow estimation and display method, system and computer readable storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101916374A (en) * 2010-08-20 2010-12-15 浙江大学 Characteristic selection method based on tracking time prediction
CN103617432A (en) * 2013-11-12 2014-03-05 华为技术有限公司 Method and device for recognizing scenes
CN107169450A (en) * 2017-05-15 2017-09-15 中国科学院遥感与数字地球研究所 The scene classification method and system of a kind of high-resolution remote sensing image
WO2018106524A1 (en) * 2016-12-07 2018-06-14 Alcatel-Lucent Usa Inc. Feature detection in compressive imaging
CN108229498A (en) * 2017-08-30 2018-06-29 黄建龙 A kind of slider of slide fastener recognition methods, device and equipment
CN108764022A (en) * 2018-04-04 2018-11-06 链家网(北京)科技有限公司 A kind of image-recognizing method and system
CN109543628A (en) * 2018-11-27 2019-03-29 北京旷视科技有限公司 A kind of face unlock, bottom library input method, device and electronic equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102043953A (en) * 2011-01-27 2011-05-04 北京邮电大学 Real-time-robust pedestrian detection method aiming at specific scene
US20120314031A1 (en) * 2011-06-07 2012-12-13 Microsoft Corporation Invariant features for computer vision
CN106886789A (en) * 2015-12-16 2017-06-23 芋头科技(杭州)有限公司 A kind of image recognition sorter and method

Also Published As

Publication number Publication date
CN110059594A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN109151501B (en) Video key frame extraction method and device, terminal equipment and storage medium
CN109697434B (en) Behavior recognition method and device and storage medium
CN109710780B (en) Archiving method and device
JP6509275B2 (en) Method and apparatus for updating a background model used for image background subtraction
CN107729809B (en) Method and device for adaptively generating video abstract and readable storage medium thereof
CN109492577B (en) Gesture recognition method and device and electronic equipment
WO2019020049A1 (en) Image retrieval method and apparatus, and electronic device
CN111062974B (en) Method and system for extracting foreground target by removing ghost
CN110427800A (en) Video object acceleration detection method, apparatus, server and storage medium
CN111259783A (en) Video behavior detection method and system, highlight video playback system and storage medium
CN110288015A (en) A kind for the treatment of method and apparatus of portrait retrieval
CN110807767A (en) Target image screening method and target image screening device
CN111291887A (en) Neural network training method, image recognition method, device and electronic equipment
CN112465020A (en) Training data set generation method and device, electronic equipment and storage medium
CN112733629A (en) Abnormal behavior judgment method, device, equipment and storage medium
CN110059594B (en) Environment perception self-adaptive image recognition method and device
CN106781167B (en) Method and device for monitoring motion state of object
CN113313098B (en) Video processing method, device, system and storage medium
CN110321808B (en) Method, apparatus and storage medium for detecting carry-over and stolen object
CN110084157B (en) Data processing method and device for image re-recognition
CN111639517A (en) Face image screening method and device
CN115984671A (en) Model online updating method and device, electronic equipment and readable storage medium
CN115761842A (en) Automatic updating method and device for human face base
CN115049963A (en) Video classification method and device, processor and electronic equipment
CN110929706B (en) Video frequency selecting method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant