CN113570574B - Scene feature detection device, scene feature search device and scene feature search method - Google Patents

Scene feature detection device, scene feature search device and scene feature search method

Info

Publication number
CN113570574B
CN113570574B (application CN202110856454.9A)
Authority
CN
China
Prior art keywords
module
scene
feature
searching
visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110856454.9A
Other languages
Chinese (zh)
Other versions
CN113570574A (en)
Inventor
夏盛
王颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Elite Systems Technology Co ltd
Original Assignee
Beijing Elite Systems Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Elite Systems Technology Co ltd filed Critical Beijing Elite Systems Technology Co ltd
Priority to CN202110856454.9A
Publication of CN113570574A
Application granted
Publication of CN113570574B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The invention relates to the technical field of scene feature detection and searching based on video images, and in particular to a scene feature detection device, a scene feature searching device and a scene feature searching method. The detection device, searching device and searching method can detect the basic features of a common scene (the five types of information a-e) with a single set of equipment, satisfying most user demands and greatly improving search speed over massive video data. At the same time, the module NM, module VM and module FM can be made compact and located in the same electrical device (a dual-purpose camera), integrating all functions together and greatly reducing cost.

Description

Scene feature detection device, scene feature search device and scene feature search method
Technical Field
The invention relates to the technical field of scene feature detection and search, in particular to a scene feature detection device, a scene feature search device and a scene feature search method.
Background
With the maturity of camera technology and AI algorithm, video algorithm technology, many kinds of intelligent video cameras are produced.
There are currently two main types of camera device: monocular (single imaging device) cameras for visible or non-visible light, and binocular (or multi-lens) cameras mixing visible and non-visible light. In addition, there is also a series of devices, such as video analysis servers, that analyze and manage video images.
Whether in a camera or in a video analysis server, once an AI algorithm or video analysis algorithm is deployed, detection and alarming of events and targets can be performed. As the field developed, video-structured cameras and video devices that facilitate later searches emerged: video structuring extracts and stores key target information from the video in real time, then builds files and indexes, so that the current scene, pictures and target information can later be obtained quickly by querying key parameter indexes.
In addition, in order to acquire event information in specific wavelength bands and detect those events, for example flames or high-temperature targets, cameras operating in non-visible spectral bands exist, such as common cameras that use infrared or far-infrared spectra for flame detection.
The above products have obvious drawbacks, for example:
A: High-pixel non-visible and visible information is not jointly used to build the index information of the video structure. When event or target detection involves non-visible light, a significant structuring gap appears, not only in the non-visible light portion but also in the combined scene description capability of the visible and non-visible portions, especially when the required scene representation can only be achieved by simultaneous detection in two or more light bands. Structured feature extraction requires more pixel detail, but accurate non-visible imagers are currently very costly, so most manufacturers compromise by using smaller non-visible imaging devices; as a result, many features are neither detailed nor accurate, and detection is not sensitive enough.
B: Abnormal events and common person/vehicle targets are not structured synchronously, so a comprehensive search service cannot be provided. For example, no smart camera product provides both a non-visible-light-based flame video-structuring function and a visible-light-based license-plate structuring function together with search.
C: No search tool supporting natural-language semantics is provided, so daily operation is complex and the intent of the searcher cannot be fully expressed. To avoid misunderstanding by the search engine, existing search tools basically adopt a menu-selection input mode, which is not user-friendly.
D: Too many structured feature quantities lead to unstable performance, inaccurate features and many erroneous search results. For example, many smart cameras currently provide dozens of structured features, but most of them are very unreliable and occupy a large amount of computing resources, resulting in high cost and low practical value.
Disclosure of Invention
To overcome the defects in the prior art, the invention provides a scene feature detection device, a scene feature search device and a scene feature search method. Large-target-surface non-visible-light imaging equipment (with length or width greater than 1000 pixels) is used to generate structured data of flame events, including their specific imaging positions, which is a necessary condition for accurate structuring; the structured data of visible light and of non-visible light are stored simultaneously and synchronously; only the most basic person and vehicle features are structured, greatly reducing the interference of unreliable features on search results, improving practicality and reducing overall cost; and a dedicated semantic analysis tool is provided to facilitate use.
To achieve the purpose, the invention adopts the following technical scheme:
the invention provides a scene characteristic detection device, which comprises a module NM, a module VM and a module FM, wherein the module FM is a database, the module NM can collect, analyze and detect non-visible light images and transmit the characteristics to the module FM for structural storage, and the module VM can collect, analyze and detect the visible light images and transmit the characteristics of the occurred events and targets to the module FM for structural storage;
the module NM and the module VM are positioned in the same electrical equipment, and the module NM and the module VM can respectively detect and analyze the non-visible light image and the visible light image at the same time;
the module NM includes a module NMa and a module NMb, where the module NMa is configured to collect images of non-visible light, and the module NMb can use the non-visible light images collected and obtained by the module NMa to perform analysis and detection, extract event or target features, and output these features to the module FM for storage;
the imaging spectral range of the module NMa includes one or more non-visible portions whose spectral range lies outside 400-700 nm; the target surface formed by the imaging points of the light waves sensed by the module NMa exceeds 1000 pixels in at least one direction and exceeds 1 million pixels over the whole target surface;
the module VM comprises a module VMa and a module VMb, the module VMa is used for collecting visible light images, and the module VMb can analyze and detect events or target features using the visible light images collected by the module VMa, extract the features, and output them to the module FM for storage;
the imaging spectral range of the module VMa includes the visible light portion at 400-700nm; the target surface formed by the imaging points of the light waves sensed by the module VMa exceeds 1000 pixels in at least one direction and exceeds 1 million pixels over the whole target surface;
the module FM simultaneously performs structured storage of the data reported by the module VM and the data reported by the module NM; structured storage means expressing the characteristics of each event or target as records satisfying the following requirements: each record includes the time of occurrence of the event or target; the characteristics of different types of events or targets may be described using the same or different parameters; the parameters describing the characteristics of an event or target are finite in number; the number of stored entries is greater than 1; and each characteristic parameter can be retrieved as an index.
Further, the structured records stored in the module FM contain one or more of the following five types (a-e) of event or target records, and the content of any one type includes all or part of the corresponding content below:
a. a feature quantity FMp of a person appearing in the scene, which includes a position and time at which the person appears;
b. a face feature quantity FMf including the position and time of a face appearing in a scene;
c. the vehicle characteristic quantity FMv comprises the position and time of the vehicle appearing in the scene, the visible light color of the vehicle and the license plate number of the vehicle;
d. fire feature FMa, which includes the time and location of flame occurrence in a scene;
e. smoke feature SMa, which contains the time and location of smoke occurrence in a scene.
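The structured-storage requirements above (timestamped records whose finite feature parameters each double as a retrievable index, where a record may carry only a subset of the five types a-e) can be sketched as follows. This is a minimal illustration, not the patented implementation; the field names and the in-memory index structure are assumptions.

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Optional

@dataclass
class SceneRecord:
    # Every record carries the occurrence time, as the FM schema requires.
    time: float
    kind: str                       # "person", "face", "vehicle", "fire", or "smoke"
    position: Optional[tuple] = None
    color: Optional[str] = None     # vehicle visible-light color (type c only)
    plate: Optional[str] = None     # license plate number (type c only)

class FM:
    """Minimal sketch of the database module FM: each stored feature
    parameter can be used as an index for retrieval."""
    def __init__(self):
        self.records = []
        self.index = defaultdict(list)   # (field, value) -> record ids

    def store(self, rec: SceneRecord):
        rid = len(self.records)
        self.records.append(rec)
        # Index every non-empty feature parameter of the record.
        for f in ("kind", "plate", "color"):
            v = getattr(rec, f)
            if v is not None:
                self.index[(f, v)].append(rid)

    def search(self, field_name, value):
        return [self.records[i] for i in self.index[(field_name, value)]]
```

A record of type c with only a plate, or of type d with only a time and position, fits this schema because unused parameters simply stay `None`, matching the note that a record need not include all feature quantities.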
Further, the sub-module NMa in the module NM images a spectrum containing at least one of the following narrowband bands: the core wavelength of the passing wave band is between 930nm and 970nm, and the half power width of the wave band is not more than 50nm; the core wavelength of the passing wave band is between 200nm and 280nm, and the half power width of the wave band is not more than 100nm.
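The two narrowband options above (a near-infrared band and a solar-blind ultraviolet band) amount to a simple qualification rule on a passband's core wavelength and half-power width, which can be expressed directly; the function name is illustrative:

```python
def qualifies_narrowband(core_nm: float, half_power_width_nm: float) -> bool:
    """Check whether an imaging passband satisfies either narrowband
    option named in the text: core wavelength 930-970 nm with half-power
    width <= 50 nm, or core wavelength 200-280 nm with width <= 100 nm."""
    near_ir = 930 <= core_nm <= 970 and half_power_width_nm <= 50
    solar_blind_uv = 200 <= core_nm <= 280 and half_power_width_nm <= 100
    return near_ir or solar_blind_uv
```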
Further, the module FM may be located separately in a device that does not contain the module NM and the module VM.
The invention also provides a scene feature searching device, which can be electrically connected with the scene feature detecting device. The searching device comprises a feature searching module SM, and the module SM can search for specific scene-feature targets or events in the module FM and output them;
the module SM comprises an input sub-module SMi, a translation sub-module SMt, a query sub-module SMs and a presentation sub-module SMp;
the sub-module SMi provides a text input box SMik allowing the outside to directly input a continuous text description string, which may contain descriptions of a plurality of different feature parameters; the content input in the text input box SMik is analyzed by the sub-module SMt;
the submodule SMt is used for acquiring text string information from an input frame SMik in the submodule SMi, analyzing the text string information by using a sentence segmentation and semantic analysis algorithm program, and extracting and forming an expression combination containing one or more characteristic quantities;
the submodule SMs is configured to obtain the analysis result of the submodule SMt and use the feature quantities as conditions to search the database module FM; during the search, records in which only some of the feature values match are also allowed to be extracted;
the submodule SMp is used for providing a user interface and presenting the searched result of the submodule SMs, wherein the presented content comprises the searched recorded text and pictures.
Further, the submodule SMt monitors the character-filling state of SMik in real time and acquires the character string from the text input box SMik in real time; no specific start instruction needs to be entered manually, and as long as SMik contains content, the sentence segmentation and semantic analysis program is automatically started to analyze it;
the submodules SMs and SMp detect the result of SMt in real time and operate automatically, without a manually entered start command.
A method for searching scene features is based on the above scene feature searching device; its sub-module SMp has the following three functions:
A. all targets or events to be presented are presented sequentially in forward or reverse order of time;
B. based on a manually specified time reference, all targets or events to be presented are presented in ascending or descending order of the difference between their occurrence time and the reference time;
C. all targets or events to be presented are first classified based on a non-temporal feature parameter or a combination of non-temporal feature parameters, and then each class is presented as in A or B above.
The beneficial effects of the invention are as follows: 1. The basic characteristics of a common scene (the five types of information a-e) can be detected to meet user demands, so the unreliable multi-feature database structure is abandoned and the search speed over massive data is greatly improved. At the same time, the modules NM, VM and FM can be made compact and located in the same electrical device (a dual-purpose camera), integrating all functions together and greatly reducing cost.
2. Through the search function of the sub-module SMs and the presentation function of the sub-module SMp, a convenient, automatically-starting text-based search tool is realized, providing the search function in a user-friendly manner.
Detailed Description
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which presently preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiment provides a scene feature detection device, the detection device comprises a module NM, a module VM and a module FM, the module FM is a database, the module NM can collect, analyze and detect non-visible light images and transmit features to the module FM for structural storage, and the module VM can collect, analyze and detect visible light images and transmit the features of events and targets to the module FM for structural storage;
the module NM and the module VM are positioned in the same electrical equipment, and can respectively detect and analyze the invisible light image and the visible light image at the same time;
module NM includes module NMa and module NMb, module NMa is configured to collect images of non-visible light, module NMb is capable of analyzing and detecting and extracting event or target features using the non-visible light images collected and obtained by module NMa and outputting these features to module FM for storage.
The imaging spectral range of the module NMa includes one or more non-visible portions whose spectrum lies outside the 400-700nm range; the target surface formed by the imaging points of the light waves sensed by the module NMa is greater than 1000 pixels in at least one direction and greater than 1 million pixels across the whole target surface.
The module VM comprises a module VMa and a module VMb; the module VMa is used for collecting visible light images, and the module VMb can analyze and detect events or target features using the visible light images collected by the module VMa and output the features to the module FM for storage.
The imaging spectral range of the module VMa includes the visible light portion at 400-700nm; the target surface formed by the imaging points of the light waves sensed by the module VMa exceeds 1000 pixels in at least one direction and exceeds 1 million pixels over the whole target surface.
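The target-surface requirement shared by NMa and VMa (more than 1000 pixels in at least one direction and more than 1 million pixels in total) can be checked with a one-line rule; this helper is illustrative only:

```python
def meets_target_surface_requirement(width_px: int, height_px: int) -> bool:
    """Sensor target-surface requirement shared by modules NMa and VMa:
    more than 1000 pixels in at least one direction AND more than
    1,000,000 pixels over the whole target surface."""
    return max(width_px, height_px) > 1000 and width_px * height_px > 1_000_000
```

Note that both conditions must hold: a 2000x400 sensor exceeds 1000 pixels in one direction but has only 800,000 total pixels and therefore does not qualify.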
The module FM simultaneously carries out structural storage on the data reported by the module VM and the data reported by the module NM; structured storage refers to the expression of the characteristics of each event or object as records meeting the following requirements: including the time of occurrence of the event or object, and the characteristics of different types of events or objects themselves can be described using the same or different parameters, and the parameters describing the characteristics of the event or object are limited, and the number of stored entries is greater than 1, and each characteristic parameter can be retrieved as an index.
In this embodiment, non-visible light of several different wavebands is used for time-division imaging. This imaging mode allows most non-flame objects to be filtered out in at least one waveband, while an actual flame is detected because its imaging in a specific spectrum has special morphological characteristics.
Further, the module VM includes a visible light image acquisition module VMa and a visible light image analysis module VMb; the module VMa performs visible light image acquisition, and the module VMb uses the visible light images acquired by the module VMa to analyze and detect occurring events, extract the characteristics of people and vehicles, and output these characteristics to the module FM.
In this embodiment, when the module VMb detects a person, we first perform image analysis based on activity, and then perform detection of the person and the vehicle based on the activity standard, so as to filter out stationary targets without practical value, reduce the number of detection results, and facilitate improvement of the detection operation speed of the module VMb.
In this embodiment, non-visible-light event and target extraction is performed using low-cost large-target-surface imaging detection, which enables the analysis accuracy of video structuring to be achieved at low cost.
Further, the structured records stored in the module FM contain one or more of the following five types (a-e) of event or target records, and the content of any one type includes all or part of the corresponding content below:
a. a feature quantity FMp of a person appearing in the scene, which includes a position and time at which the person appears;
b. a face feature quantity FMf including the position and time of a face appearing in a scene;
c. the vehicle characteristic quantity FMv comprises the position and time of the vehicle appearing in the scene, the visible light color of the vehicle and the license plate number of the vehicle;
d. fire feature FMa, which includes the time and location of flame occurrence in a scene;
e. smoke feature SMa, which contains the time and location of smoke occurrence in a scene.
Note that each record in the module FM does not necessarily include all of the scene feature quantities. For example, a record may include the vehicle feature category and the person feature category, yet the vehicle features contain only the license plate number and the person features contain only the time of appearance.
The face feature quantity FMf is a face feature obtained from face data by a deep-learning network.
In this embodiment, the module NM and the module VM are located in the same electrical device, so as to achieve high-precision target and event detection of non-visible light and visible light in one device at the same time, and form scene feature databases in images under two spectrums in the module FM at the same time.
In this embodiment, we analyzed the demands of most users and found that more than 90% of search requirements can be supported by only the most basic features (the five types of information a-e), thus discarding the unreliable multi-feature database structure and greatly improving search speed over massive data.
In this embodiment, the sub-module NMa in the module NM images a spectrum containing at least one of the following narrowband bands: the core wavelength of the passing wave band is between 930nm and 970nm, and the half power width of the wave band is not more than 50nm; the core wavelength of the passing wave band is between 200nm and 280nm, and the half power width of the wave band is not more than 100nm.
Generally, ensuring building safety is a fundamental requirement for owners of indoor buildings. This includes the following most basic functional requirements: detecting a fire at a very early stage and alarming in real time; detecting the people and vehicles entering the building area, recording their characteristics, and later being able at any time to quickly search, locate and list entry events across long periods of video (such as one day, one week, several months or several years), so that all relevant information can be found within a few seconds. In the prior art these characteristics are recorded among dozens of structured features, so the structured feature quantities are excessive, the data are massive, the performance is unstable, the features are inaccurate, search results are erroneous, and multiple devices must cooperate: a visible-light smart camera + a smart server (video structuring server) + a non-visible-light camera + a search platform. In the present invention, user needs can be met by detecting only the most basic characteristics (the five types of information a-e), so the unreliable multi-feature database structure is discarded, the search speed over massive data is greatly improved, the modules NM, VM and FM can be made compact and located in the same electrical device (a dual-purpose camera), all functions are integrated, and the cost is greatly reduced.
Further, the module FM may be located in a device that does not include the module NM and the module VM; the module FM and the module NM or module VM are electrically connected to each other to transfer data. When the detection devices are deployed in large numbers, many NM and VM modules are necessarily used; storing their detected data uniformly in the module FM improves the utilization rate of the module FM and facilitates unified data management.
Further, one FM may be electrically and data connected to a plurality of different NM and VM modules.
The embodiment also provides a scene feature searching device, which can be electrically connected with a scene feature detecting device. The searching device comprises a feature searching module SM, and the module SM can search for specific scene-feature targets or events in the module FM and output them. The module SM comprises an input sub-module SMi, a translation sub-module SMt, a query sub-module SMs and a presentation sub-module SMp. The sub-module SMi provides a text input box SMik allowing the outside to directly input a continuous text description string, which may contain descriptions of a plurality of different feature parameters; the content input in the text input box SMik is analyzed by the sub-module SMt. The sub-module SMt is configured to obtain the text string from the input box SMik in the sub-module SMi, analyze it with a sentence segmentation and semantic analysis algorithm, and extract an expression combination containing one or more feature quantities. The sub-module SMs is configured to obtain the analysis result of the sub-module SMt and use the feature quantities as conditions to search the database module FM, allowing records in which only some of the feature values match to be extracted. The sub-module SMp is configured to provide a user interface and present the results found by the sub-module SMs, where the presented content includes the text and pictures of the retrieved records.
In the sentence segmentation and semantic parsing algorithm of the submodule SMt, the input characters are specifically decomposed, from back to front, into "meta-words" that cannot be decomposed further; then all possible feature-quantity expressions among the meta-words are traversed and combined. There may be tens or even hundreds of combinations, each corresponding to one search result, and finally the union of all results is provided as the search result.
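A toy version of this parse-then-union strategy can be sketched as follows. The real SMt segments continuous Chinese text from back to front; here a whitespace split and a tiny hypothetical vocabulary stand in for that step, and records are plain dictionaries:

```python
from itertools import combinations

# Hypothetical vocabulary mapping meta-words to feature-quantity expressions.
VOCAB = {"red": ("color", "red"),
         "vehicle": ("kind", "vehicle"),
         "fire": ("kind", "fire")}

def parse_query(text: str):
    """SMt sketch: decompose the input into meta-words, then enumerate
    every combination of the recognized feature-quantity expressions."""
    metas = [VOCAB[w] for w in text.split() if w in VOCAB]
    combos = []
    for n in range(1, len(metas) + 1):
        combos.extend(combinations(metas, n))
    return combos

def search_union(records, combos):
    """SMs sketch: run each combination as a query (partial matches are
    combinations of fewer features) and return the union of all results."""
    hits = []
    for combo in combos:
        for rec in records:
            if all(rec.get(f) == v for f, v in combo) and rec not in hits:
                hits.append(rec)
    return hits
```

For the input "red vehicle" this yields three combinations (color alone, kind alone, and both), so a record matching only the color still appears in the union, mirroring the partial-match behavior described above.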
In this embodiment, the sub-module SMt monitors the text-filling state of SMik in real time and acquires the text string from the input box SMik in real time; as long as SMik has content, the sentence segmentation and semantic analysis program is started automatically, without a manually entered start instruction. The sub-modules SMs and SMp detect the result of SMt in real time and operate automatically, without a manually entered start command.
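The auto-start behavior described above can be sketched as a simple change-watcher over the input box; the class names and the polling-tick model are illustrative assumptions, not the patented mechanism:

```python
class SMikBox:
    """Toy stand-in for the text input box SMik."""
    def __init__(self):
        self.text = ""

class SMtWatcher:
    """Sketch of SMt's auto-start: on each polling tick, if the input
    box is non-empty and has changed, parsing is triggered with no
    explicit 'search' command from the user."""
    def __init__(self, box, on_parse):
        self.box = box
        self.on_parse = on_parse   # downstream hook: SMt -> SMs -> SMp
        self._last = ""

    def tick(self):
        current = self.box.text
        if current and current != self._last:
            self._last = current
            self.on_parse(current)
```

An empty box triggers nothing, and unchanged content is not re-parsed, so the pipeline runs exactly once per edit of the query string.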
Further, the module SM and the module FM can be located in different electrical devices, and data transmission is performed between the module SM and the module FM through an electrical connection interface, for example, the module SM is located in a mobile terminal, and the module FM is located in a data monitoring center, so that the mobile terminal is convenient to use for searching and querying data.
In this embodiment, the sub-module SMp has the following three functions:
A. all targets or events to be presented are presented sequentially in forward or reverse order of time;
B. based on a manually specified time reference, all targets or events to be presented are presented in ascending or descending order of the difference between their occurrence time and the reference time;
C. all targets or events to be presented are first classified based on a non-temporal feature parameter or a combination of non-temporal feature parameters, and then each class is presented as in A or B above.
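The three presentation modes can be sketched as three small functions over event records; the dictionary-based record format is an assumption for illustration:

```python
def present_by_time(events, reverse=False):
    """Mode A: order by occurrence time, ascending or descending."""
    return sorted(events, key=lambda e: e["time"], reverse=reverse)

def present_by_reference(events, t_ref, descending=False):
    """Mode B: order by the absolute difference between occurrence
    time and a manually specified reference time."""
    return sorted(events, key=lambda e: abs(e["time"] - t_ref),
                  reverse=descending)

def present_grouped(events, key, reverse=False):
    """Mode C: classify by a non-temporal feature parameter, then
    apply mode A within each class."""
    groups = {}
    for e in events:
        groups.setdefault(e[key], []).append(e)
    return {k: present_by_time(v, reverse) for k, v in groups.items()}
```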
Further, the sub-module SMp may be located separately in a device that does not contain the other sub-modules of the module SM (SMi, SMt, SMs), so that the presentation portion of the module SM can operate independently.
Further, one SM may be electrically and data connected to a plurality of different FMs.
Further, the sub-module SMp can be electrically and data connected to a plurality of different sub-modules SMs.
The foregoing embodiments express only preferred implementations of the invention; although described in relative detail, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make modifications, improvements and substitutions without departing from the spirit of the invention, all of which fall within its scope. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (6)

1. A scene feature detection device, characterized in that:
the detection device comprises a module NM, a module VM and a module FM, wherein the module FM is a database, the module NM can collect, analyze and detect non-visible light images and transmit features to the module FM for structural storage, and the module VM can collect, analyze and detect visible light images and transmit the features of the occurred events and targets to the module FM for structural storage;
the module NM and the module VM are positioned in the same electrical equipment, and the module NM and the module VM can respectively detect and analyze the non-visible light image and the visible light image at the same time;
the module NM includes a module NMa and a module NMb, where the module NMa is configured to collect images of non-visible light, and the module NMb can use the non-visible light images collected and obtained by the module NMa to perform analysis and detection, extract event or target features, and output these features to the module FM for storage;
the imaging spectral range of the module NMa includes one or more non-visible portions whose spectral range lies outside 400-700 nm; the target surface formed by the imaging points of the light waves sensed by the module NMa exceeds 1000 pixels in at least one direction and exceeds 1 million pixels over the whole target surface;
the module VM comprises a module VMa and a module VMb, wherein the module VMa is used for collecting visible light images, and the module VMb can analyze and detect events or target characteristics by utilizing the visible light images collected and obtained by the VMa module and output the characteristics to the module FM for storage;
the imaging spectral range of the module VMa includes the visible light portion at 400-700nm; the target surface formed by the imaging points of the light waves sensed by the module VMa exceeds 1000 pixels in at least one direction and exceeds 1 million pixels over the whole target surface;
the module FM simultaneously performs structured storage of the data reported by the module VM and the data reported by the module NM; structured storage means expressing the characteristics of each event or target as records satisfying the following requirements: each record includes the time of occurrence of the event or target; different types of events or targets may be characterized using the same or different parameters; the parameters describing the characteristics of an event or target are finite in number; the number of stored entries is greater than 1; and each characteristic parameter can be retrieved as an index;
the structured records stored in the module FM contain one or more of the following five types (a-e) of event or target records, and any one type includes all or part of the corresponding content below:
a. a feature quantity FMp of a person appearing in the scene, which includes a position and time at which the person appears;
b. a face feature quantity FMf including the position and time of a face appearing in a scene;
c. the vehicle characteristic quantity FMv comprises the position and time of the vehicle appearing in the scene, the visible light color of the vehicle and the license plate number of the vehicle;
d. fire feature FMa, which includes the time and location of flame occurrence in a scene;
e. smoke feature SMa, which contains the time and location of smoke occurrence in a scene.
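The structured storage described in claim 1 is implementation-agnostic. As an illustrative sketch only (not part of the claims; all class, field and function names here are hypothetical), the record layout and index-style search with partial-value matching could look like this:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FeatureRecord:
    # One structured entry in the module FM. Every record carries the
    # time of occurrence; the remaining parameters differ per type
    # (person FMp, face FMf, vehicle FMv, fire FMa, smoke SMa).
    rec_type: str                 # "person" | "face" | "vehicle" | "fire" | "smoke"
    time: float                   # time the event or target occurred
    params: Dict[str, str] = field(default_factory=dict)

class FeatureStore:
    # Minimal in-memory stand-in for the module FM: stores records from
    # both NM and VM and lets feature parameters act as search indexes.
    def __init__(self) -> None:
        self.records: List[FeatureRecord] = []

    def add(self, rec: FeatureRecord) -> None:
        self.records.append(rec)

    def search(self, **conditions: str) -> List[FeatureRecord]:
        # A record matches when every given condition value is contained
        # in the corresponding parameter (partial matching allowed).
        return [rec for rec in self.records
                if all(cond in rec.params.get(key, "")
                       for key, cond in conditions.items())]

fm = FeatureStore()
fm.add(FeatureRecord("vehicle", 1627459200.0,
                     {"position": "gate-3", "color": "red", "plate": "A12345"}))
fm.add(FeatureRecord("smoke", 1627459260.0, {"position": "warehouse"}))
print([r.rec_type for r in fm.search(color="red")])   # → ['vehicle']
```

The finite parameter dictionary per record mirrors the claim's requirement that the describing parameters be limited in number while still serving as search indexes.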
2. The apparatus for scene feature detection as claimed in claim 1, wherein:
the sub-module NMa in the module NM images in at least one of the following narrow bands: a passband whose core wavelength lies between 930nm and 970nm with a half-power window width of not more than 50nm; or a passband whose core wavelength lies between 200nm and 280nm with a half-power window width of not more than 100nm.
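A minimal sketch of checking a filter against the two claimed narrowband options, near-infrared and ultraviolet (illustrative only; the function name and the use of nanometre units as plain floats are assumptions):

```python
def band_accepted(core_nm: float, half_power_width_nm: float) -> bool:
    # Near-infrared option: core 930-970 nm, half-power window <= 50 nm.
    nir = 930.0 <= core_nm <= 970.0 and half_power_width_nm <= 50.0
    # Ultraviolet option: core 200-280 nm, half-power window <= 100 nm.
    uv = 200.0 <= core_nm <= 280.0 and half_power_width_nm <= 100.0
    return nir or uv

print(band_accepted(950, 40))   # → True  (NIR option)
print(band_accepted(260, 80))   # → True  (UV option)
print(band_accepted(550, 30))   # → False (visible light, outside both bands)
```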
3. A scene feature detection apparatus as claimed in any of claims 1 to 2, wherein:
the module FM may be located separately in a device that contains neither the module NM nor the module VM.
4. A device for searching scene features, characterized in that:
the searching device can be electrically connected to the scene feature detection device of any one of claims 1-3 and comprises a feature searching module SM, wherein the module SM can search specific scene feature targets or events out of the module FM and output them;
the module SM comprises an input sub-module SMi, a translation sub-module SMt, a query sub-module SMs and a presentation sub-module SMp;
the sub-module SMi provides a text input box SMik that allows a continuous text description string to be entered directly from outside; the string may contain descriptions of a plurality of different feature parameters, and the content entered in the text input box SMik is analyzed by the sub-module SMt;
the sub-module SMt is used to acquire the text string information from the input box SMik in the sub-module SMi, analyze it using a sentence segmentation and semantic analysis algorithm program, and extract an expression combination containing one or more feature quantities;
the sub-module SMs is configured to obtain the analysis result of the sub-module SMt and search the database module FM using the feature quantities as search conditions; during the search, records in which only part of the feature quantity values match are also allowed to be extracted;
the sub-module SMp is used to provide a user interface and present the search results of the sub-module SMs, the presented content including the text and pictures of the retrieved records.
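The SMi → SMt → SMs → SMp chain described above can be sketched as follows. This is an illustrative stand-in only: a trivial `key=value` splitter replaces the claimed sentence-segmentation and semantic-analysis program, and all function names are hypothetical. It does, however, show the claimed behaviour of extracting records where only part of a feature value matches:

```python
import re
from typing import Dict, List

def parse_query(text: str) -> Dict[str, str]:
    # Stand-in for the sub-module SMt: split a free-text string into
    # feature-quantity conditions (real SMt would use semantic analysis).
    return dict(re.findall(r"(\w+)\s*=\s*(\w+)", text))

def search_fm(records: List[Dict[str, str]],
              conditions: Dict[str, str]) -> List[Dict[str, str]]:
    # Stand-in for SMs: a record is returned when every condition value
    # is contained in the record's value, i.e. partial matches count.
    return [r for r in records
            if all(v in r.get(k, "") for k, v in conditions.items())]

def present(results: List[Dict[str, str]]) -> None:
    # Stand-in for SMp: plain-text rendering of each retrieved record.
    for r in results:
        print(" ".join(f"{k}:{v}" for k, v in sorted(r.items())))

fm_records = [
    {"type": "vehicle", "color": "red", "plate": "A12345", "time": "10:15"},
    {"type": "person", "position": "gate", "time": "10:17"},
]
conds = parse_query("color=red plate=A123")   # partial plate "A123"
present(search_fm(fm_records, conds))         # prints the red vehicle record
```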
5. The apparatus for scene feature searching as recited in claim 4, wherein:
the sub-module SMt monitors the character filling state of SMik in real time and acquires the character string information from the text input box SMik in real time; no specific start instruction needs to be entered manually, and as long as SMik contains content, the sentence segmentation and semantic analysis algorithm program is automatically started to analyze it;
the sub-modules SMs and SMp detect the results of the sub-module SMt in real time and can operate automatically without a specific start command being entered manually.
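The auto-start behaviour of claim 5, where any non-empty content in the input box triggers analysis with no explicit start command, can be sketched as an observer attached to the box (illustrative only; all names are hypothetical):

```python
from typing import Callable, List

class AutoSearchBox:
    # Sketch of the SMik/SMt coupling: any non-empty content placed in
    # the box immediately triggers the analysis callback, with no
    # explicit start command from the user.
    def __init__(self, on_change: Callable[[str], None]) -> None:
        self._text = ""
        self._on_change = on_change

    def set_text(self, text: str) -> None:
        self._text = text
        if self._text.strip():          # content present -> auto-start
            self._on_change(self._text)

results: List[List[str]] = []
box = AutoSearchBox(on_change=lambda t: results.append(t.split()))
box.set_text("")            # empty -> nothing triggered
box.set_text("color=red")   # non-empty -> analysis runs automatically
print(results)              # → [['color=red']]
```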
6. A method of scene feature searching, based on the scene feature searching apparatus as claimed in claim 4 or 5, characterized in that:
the sub-module SMp has the following three functions:
A. all targets or events to be presented are presented sequentially in forward or reverse chronological order;
B. based on a manually specified time reference, all targets or events to be presented are presented in ascending or descending order of the difference between their occurrence time and the reference time;
C. all targets or events to be presented are first classified based on a non-temporal feature parameter or a combination of non-temporal feature parameters, and then each class is presented as in A or B above.
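The three presentation orders A, B and C above can be sketched with plain sorting (illustrative only; the event dictionaries and the reference time `t_ref` are hypothetical):

```python
from itertools import groupby

events = [
    {"type": "vehicle", "time": 100}, {"type": "smoke", "time": 40},
    {"type": "vehicle", "time": 70},  {"type": "person", "time": 55},
]

# A: forward (or, with reverse=True, backward) chronological order.
order_a = sorted(events, key=lambda e: e["time"])

# B: ascending distance from a manually specified reference time.
t_ref = 60
order_b = sorted(events, key=lambda e: abs(e["time"] - t_ref))

# C: classify by a non-temporal parameter, then apply A within each class.
by_type = sorted(events, key=lambda e: e["type"])
order_c = {k: sorted(g, key=lambda e: e["time"])
           for k, g in groupby(by_type, key=lambda e: e["type"])}

print([e["time"] for e in order_a])   # → [40, 55, 70, 100]
print([e["time"] for e in order_b])   # → [55, 70, 40, 100]
```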
CN202110856454.9A 2021-07-28 2021-07-28 Scene feature detection device, scene feature search device and scene feature search method Active CN113570574B (en)

Publications (2)

Publication Number Publication Date
CN113570574A (en) 2021-10-29
CN113570574B (en) 2023-12-01

Family

ID=78168441

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant