CN115527045A - Image identification method and device for snow field danger identification - Google Patents

Image identification method and device for snow field danger identification

Info

Publication number
CN115527045A
CN115527045A
Authority
CN
China
Prior art keywords
risk
image
taylor
danger
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211149596.2A
Other languages
Chinese (zh)
Inventor
温建伟
邓迪旻
袁潮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202211149596.2A
Publication of CN115527045A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image identification method and device for snow field danger identification. The method comprises the following steps: acquiring original image data; generating a Taylor split image unit from the original image data using a low-order Taylor derivative; fusing the Taylor split image unit with a risk parameter factor to obtain a risk indication image; and matching the risk indication image against a risk identification matrix to obtain target danger data. The invention solves the technical problem in the prior art that, during snow field danger image identification, whether danger information exists is judged only from the original image data or its related parameters; because original image data is typically large and scattered, the judgment of danger information is inaccurate or inefficient.

Description

Image identification method and device for snow field danger identification
Technical Field
The invention relates to the field of specialized image recognition, and in particular to an image identification method and device for snow field danger identification.
Background
With the continuous development of intelligent technology, people increasingly use intelligent devices in daily life, work, and study. The use of intelligent technology has improved people's quality of life and increased the efficiency of study and work.
At present, where high-precision camera equipment is deployed in a snow field for danger identification in snow field images, a danger model is usually trained on danger points marked in the original image data. The model may be a deep-learning neural network, used to rapidly map input original-image feature vectors to output danger feature vectors, from which danger signals and danger parameters are judged. However, in the prior-art process of identifying danger images in a snow field, whether danger information exists is judged only from the original image data or its related parameters; because original image data is typically large and scattered, the judgment of danger information is inaccurate or inefficient.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an image identification method and device for snow field danger identification, to at least solve the technical problem in the prior art that, during snow field danger image identification, whether danger information exists is judged only from the original image data or its related parameters; because original image data is typically large and scattered, the judgment of danger information is inaccurate or inefficient.
According to one aspect of the embodiments of the invention, an image recognition method for snow field danger identification is provided, including: acquiring original image data; generating a Taylor split image unit from the original image data using a low-order Taylor derivative; fusing the Taylor split image unit with a risk parameter factor to obtain a risk indication image; and matching the risk indication image against a risk identification matrix to obtain target danger data.
Optionally, generating the Taylor split image unit from the original image data using the low-order Taylor derivative includes: extracting pixel parameters from the original image data; and inputting the pixel parameters into a standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment.
Optionally, fusing the Taylor split image unit and the risk parameter factor to obtain the risk indication image includes: generating the risk parameter factor according to a risk standard parameter; and fusing the Taylor split image unit and the risk parameter factor by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g.
Optionally, after the risk indication image is matched against the risk identification matrix to obtain the target danger data, the method further includes: fitting the target danger data and the original image data to obtain a danger image result.
According to another aspect of the embodiments of the invention, an image recognition apparatus for snow field danger identification is also provided, including: an acquisition module, configured to acquire original image data; a splitting module, configured to generate a Taylor split image unit from the original image data using a low-order Taylor derivative; a fusion module, configured to fuse the Taylor split image unit with a risk parameter factor to obtain a risk indication image; and a matching module, configured to match the risk indication image against a risk identification matrix to obtain target danger data.
Optionally, the splitting module includes: an extraction unit, configured to extract pixel parameters from the original image data; and an input unit, configured to input the pixel parameters into a standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment.
Optionally, the fusion module includes: a generating unit, configured to generate the risk parameter factor according to a risk standard parameter; and a fusion unit, configured to fuse the Taylor split image unit and the risk parameter factor by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g.
Optionally, the apparatus further includes: a fitting module, configured to fit the target danger data and the original image data to obtain a danger image result.
According to another aspect of the embodiments of the invention, a non-volatile storage medium is also provided, including a stored program, wherein, when running, the program controls a device in which the non-volatile storage medium is located to perform an image identification method for snow field danger identification.
According to another aspect of the embodiments of the invention, an electronic device is also provided, including a processor and a memory; the memory stores computer-readable instructions for execution by the processor, wherein the computer-readable instructions, when executed, perform an image identification method for snow field danger identification.
In the embodiments of the invention, the method comprises: acquiring original image data; generating a Taylor split image unit from the original image data using a low-order Taylor derivative; fusing the Taylor split image unit with a risk parameter factor to obtain a risk indication image; and matching the risk indication image against a risk identification matrix to obtain target danger data. This solves the technical problem in the prior art that, during snow field danger image identification, whether danger information exists is judged only from the original image data or its related parameters; because original image data is typically large and scattered, the judgment of danger information is inaccurate or inefficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of an image recognition method for snow field hazard recognition according to an embodiment of the present invention;
fig. 2 is a block diagram of an image recognition apparatus for snow field danger recognition according to an embodiment of the present invention;
fig. 3 is a block diagram of a terminal device for performing a method according to the present invention, according to an embodiment of the present invention;
fig. 4 is a memory unit for holding or carrying program code implementing a method according to the invention, according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the invention, there is provided a method embodiment of an image recognition method for snow field hazard identification, it being noted that the steps illustrated in the flowchart of the figures may be carried out in a computer system such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be carried out in an order different than here.
Example one
Fig. 1 is a flowchart of an image recognition method for snow field hazard recognition according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, original image data is acquired.
Specifically, to solve the prior-art problem that, during snow field danger image identification, whether danger information exists is judged only from the original image data or its related parameters, so that the large and scattered nature of the original image data makes the judgment inaccurate or inefficient, the original image data first needs to be acquired by a high-precision multi-lens camera module, then stored and transmitted, so that the danger data in the snow field image can subsequently be obtained and analyzed.
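The acquisition step is described only at the hardware level; as a rough illustration, raw frames from a multi-lens camera module can be modeled as a stack of arrays held together for later analysis (the lens count, frame size, and random pixel data below are placeholders, not values from the patent):

```python
import numpy as np

def acquire_raw_frames(num_lenses=3, height=480, width=640, seed=0):
    """Simulate acquisition of raw image data from a multi-lens camera.

    In the patent the frames would come from high-precision camera
    hardware; here random grayscale frames stand in for real captures.
    """
    rng = np.random.default_rng(seed)
    # One 8-bit grayscale frame per lens, stored together so the
    # danger data in the snow field images can be analyzed later.
    return rng.integers(0, 256, size=(num_lenses, height, width)).astype(np.uint8)

frames = acquire_raw_frames()
```

The stacked layout keeps every lens's frame addressable by index, which is convenient for the per-frame splitting and fusion steps that follow.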
And step S104, generating a Taylor splitting image unit by using a low-order Taylor derivative according to the original image data.
Optionally, generating the Taylor split image unit from the original image data using the low-order Taylor derivative includes: extracting pixel parameters from the original image data; and inputting the pixel parameters into a standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment.
Specifically, after the original image data is acquired, in order to further increase the efficiency of danger-data judgment, the original image data needs to be split by a low-order Taylor derivative to obtain split image units, where each image unit represents a portion of the decomposed original image data. When a Taylor derivative is used for image-algorithm analysis, a whole image can be decomposed into a polynomial expansion meeting the requirements, and the degree of decomposition is determined by the order of the Taylor series; in danger-data analysis, it is most reasonable for the Taylor series to use low-order derivatives. That is, pixel parameters are extracted from the original image data, and the pixel parameters are input into the standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment.
And S106, fusing the Taylor split image unit and the danger parameter factor to obtain a danger indication image.
Optionally, fusing the Taylor split image unit and the risk parameter factor to obtain the risk indication image includes: generating the risk parameter factor according to a risk standard parameter; and fusing the Taylor split image unit and the risk parameter factor by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g.
Specifically, after the Taylor split image unit is obtained, in order to further analyze the risk indication image data, a risk parameter factor needs to be fused with the Taylor split image unit. The Taylor split image unit and the risk parameter factor are fused by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g, on which a fitting calculation is performed to obtain the final risk data.
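Reading the fusion formula as s = v·tanh(ωg + q) applied elementwise to a split image unit g, a minimal sketch might look as follows; the values chosen for v, ω, and q are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def fuse_risk_indication(g, v=0.8, omega=0.05, q=-0.5):
    """Fuse a Taylor split image unit g with a risk parameter factor v.

    Implements s = v * tanh(omega * g + q) elementwise, one plausible
    reading of the disclosure's fusion step.  tanh bounds the output
    to (-v, v), giving a normalized risk indication value per pixel.
    """
    return v * np.tanh(omega * g + q)

g = np.linspace(0, 255, 6)   # sample pixel intensities of a split unit
s = fuse_risk_indication(g)
```

Because tanh is strictly increasing and bounded, brighter regions of the split unit map to monotonically higher, saturating risk scores, which keeps the downstream matrix matching numerically well-behaved.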
And S108, matching the danger indication images through a danger identification matrix to obtain target danger data.
Specifically, in order to establish a one-to-one correspondence between the parameters in the risk indication image data and the parameters in the risk identification matrix, the risk identification matrix needs to be constructed for matching. The risk identification matrix may be constructed according to user requirements, or it may be a binary matrix of corresponding parameters extracted by a big data platform according to the acquisition environment of the original image data. The risk indication image is matched against the risk identification matrix to obtain the target danger data.
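The patent leaves the matching rule open. The sketch below assumes the risk identification matrix is the binary matrix described above and that "matching" means thresholding the risk scores inside cells the matrix marks as danger-relevant; the threshold value is an assumption:

```python
import numpy as np

def match_danger(risk_image, danger_matrix, threshold=0.5):
    """Match a risk indication image against a binary danger
    identification matrix.

    A pixel is reported as target danger data when its risk score
    exceeds the threshold AND the matrix marks that cell as
    danger-relevant, giving the one-to-one parameter correspondence
    the disclosure describes.
    """
    hits = (risk_image > threshold) & (danger_matrix == 1)
    ys, xs = np.nonzero(hits)
    return list(zip(ys.tolist(), xs.tolist()))   # coordinates of danger pixels

risk = np.array([[0.9, 0.1],
                 [0.6, 0.7]])
matrix = np.array([[1, 1],
                   [0, 1]])   # cell (1, 0) is masked out by the matrix
targets = match_danger(risk, matrix)
```

Note that the high score at (1, 0) is suppressed because the matrix excludes that cell, which is the point of matching against an environment-derived mask rather than thresholding alone.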
Optionally, after the risk indication image is matched against the risk identification matrix to obtain the target danger data, the method further includes: fitting the target danger data and the original image data to obtain a danger image result.
Specifically, in order to mark the target danger data in the original image data in time, so as to achieve the warning and alarm effects of the high-precision image capturing apparatus, after the risk indication image is matched against the risk identification matrix to obtain the target danger data, the method further includes: fitting the target danger data and the original image data to obtain a danger image result.
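The "fitting" of target danger data back onto the original image is not specified in the disclosure. One simple interpretation, marking the matched coordinates in a copy of the original frame so the result can drive a warning display, could look like this:

```python
import numpy as np

def overlay_danger(original, targets, mark=255):
    """Fit target danger data back onto the original image.

    "Fitting" is read here as marking the matched danger coordinates
    in a copy of the original frame; the original is left untouched so
    it remains available for further analysis.
    """
    result = original.copy()
    for y, x in targets:
        result[y, x] = mark   # burn a bright marker at each danger pixel
    return result

orig = np.zeros((3, 3), dtype=np.uint8)
marked = overlay_danger(orig, [(0, 0), (2, 2)])
```

In a real deployment the marker would more likely be a colored box or alarm region, but the copy-and-mark pattern is the same.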
Through the embodiment, the technical problems that in the snow field dangerous image identification process in the prior art, whether dangerous information exists is judged only through original image data or relevant parameters of the original image data, and the dangerous information is often influenced by the characteristics of large and scattered original image data, so that the dangerous information is judged inaccurately or the judgment efficiency is low are solved.
Example two
Fig. 2 is a block diagram of an image recognition apparatus for snow field danger recognition according to an embodiment of the present invention, as shown in fig. 2, the apparatus including:
an obtaining module 20, configured to obtain raw image data.
Specifically, to solve the prior-art problem that, during snow field danger image identification, whether danger information exists is judged only from the original image data or its related parameters, so that the large and scattered nature of the original image data makes the judgment inaccurate or inefficient, the original image data first needs to be acquired by a high-precision multi-lens camera module, then stored and transmitted, so that the danger data in the snow field image can subsequently be obtained and analyzed.
A splitting module 22, configured to generate a taylor split image unit according to the original image data by using a low-order taylor derivative.
Optionally, the splitting module includes: an extraction unit, configured to extract pixel parameters from the original image data; and an input unit, configured to input the pixel parameters into a standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment.
Specifically, after the original image data is acquired, in order to further increase the efficiency of danger-data judgment, the original image data needs to be split by a low-order Taylor derivative to obtain split image units, where each image unit represents a portion of the decomposed original image data. When a Taylor derivative is used for image-algorithm analysis, a whole image can be decomposed into a polynomial expansion meeting the requirements, and the degree of decomposition is determined by the order of the Taylor series; in danger-data analysis, it is most reasonable for the Taylor series to use low-order derivatives. That is, pixel parameters are extracted from the original image data, and the pixel parameters are input into the standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment.
And the fusion module 24 is configured to fuse the taylor split image unit and the risk parameter factor to obtain a risk indication image.
Optionally, the fusion module includes: a generating unit, configured to generate the risk parameter factor according to a risk standard parameter; and a fusion unit, configured to fuse the Taylor split image unit and the risk parameter factor by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g.
Specifically, after the Taylor split image unit is obtained, in order to further analyze the risk indication image data, a risk parameter factor needs to be fused with the Taylor split image unit. The Taylor split image unit and the risk parameter factor are fused by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g, on which a fitting calculation is performed to obtain the final risk data.
And the matching module 26 is used for matching the danger indication images through a danger identification matrix to obtain target danger data.
Specifically, in order to establish a one-to-one correspondence between the parameters in the risk indication image data and the parameters in the risk identification matrix, the risk identification matrix needs to be constructed for matching. The risk identification matrix may be constructed according to user requirements, or it may be a binary matrix of corresponding parameters extracted by a big data platform according to the acquisition environment of the original image data. The risk indication image is matched against the risk identification matrix to obtain the target danger data.
Optionally, the apparatus further includes: a fitting module, configured to fit the target danger data and the original image data to obtain a danger image result.
Specifically, in order to mark the target danger data in the original image data in time, so as to achieve the warning and alarm effects of the high-precision image capturing apparatus, after the risk indication image is matched against the risk identification matrix to obtain the target danger data, the apparatus further performs: fitting the target danger data and the original image data to obtain a danger image result.
Through the embodiment, the technical problems that in the snow field dangerous image identification process in the prior art, whether dangerous information exists is judged only through original image data or relevant parameters of the original image data, and the dangerous information is often influenced by the characteristics of large and scattered original image data, so that the dangerous information is judged inaccurately or the judgment efficiency is low are solved.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program controls an apparatus in which the non-volatile storage medium is located when running to perform an image recognition method for snow field hazard recognition.
Specifically, the method comprises the following steps: acquiring original image data; generating a Taylor split image unit from the original image data using a low-order Taylor derivative; fusing the Taylor split image unit with a risk parameter factor to obtain a risk indication image; and matching the risk indication image against a risk identification matrix to obtain target danger data. Optionally, generating the Taylor split image unit includes: extracting pixel parameters from the original image data; and inputting the pixel parameters into a standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment. Optionally, fusing the Taylor split image unit and the risk parameter factor to obtain the risk indication image includes: generating the risk parameter factor according to a risk standard parameter; and fusing the Taylor split image unit and the risk parameter factor by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g. Optionally, after the risk indication image is matched against the risk identification matrix to obtain the target danger data, the method further includes: fitting the target danger data and the original image data to obtain a danger image result.
According to another aspect of the embodiments of the invention, an electronic device is also provided, including a processor and a memory; the memory stores computer-readable instructions for execution by the processor, wherein the computer-readable instructions, when executed, perform an image identification method for snow field danger identification.
Specifically, the method includes: acquiring original image data; generating a Taylor split image unit from the original image data using a low-order Taylor derivative; fusing the Taylor split image unit with a risk parameter factor to obtain a risk indication image; and matching the risk indication image against a risk identification matrix to obtain target danger data. Optionally, generating the Taylor split image unit includes: extracting pixel parameters from the original image data; and inputting the pixel parameters into a standard low-order Taylor derivative formula to obtain the Taylor split image unit used for danger image judgment. Optionally, fusing the Taylor split image unit and the risk parameter factor to obtain the risk indication image includes: generating the risk parameter factor according to a risk standard parameter; and fusing the Taylor split image unit and the risk parameter factor by s = v·tanh(ωg + q), where s is the risk indication image data set, v is the risk parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g. Optionally, after the risk indication image is matched against the risk identification matrix to obtain the target danger data, the method further includes: fitting the target danger data and the original image data to obtain a danger image result.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to realize communication connections between the elements. The memory 33 may comprise a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various programs may be stored in the memory 33 for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software-programmable interface, a camera, and a sensor. Optionally, the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware insertion interface (for example, a USB interface, a serial port, or the like) for data transmission between devices. Optionally, the user-oriented user interface may be, for example, user-oriented control keys, a voice input device for receiving voice input, or a touch sensing device for receiving user touch input (e.g., a touch screen with a touch sensing function, a touch pad, etc.). Optionally, the software-programmable interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, the terminal device may further include a transceiver, which may be a radio-frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, a speaker, or other output devices.
In this embodiment, the processor of the terminal device includes modules for executing the functions of the modules of the data processing apparatus described above; for specific functions and technical effects, reference may be made to the foregoing embodiments, which are not repeated here.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 shows a specific implementation of the embodiment of fig. 3. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above-described embodiment.
The memory 42 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The memory 42 may comprise a Random Access Memory (RAM) and may further comprise a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. Processing component 40 may include one or more processors 41 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 40 may include one or more modules that facilitate interaction between processing component 40 and other components. For example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power supply component 44 provides power to the various components of the terminal device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 also includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 48 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor assembly 48 may detect the open/closed status of the terminal device, the relative positioning of the components, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log on to a GPRS network and establish communication with a server via the internet.
From the above, the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also fall within the protection scope of the present invention.

Claims (10)

1. An image identification method for snow field danger identification, comprising:
acquiring original image data;
generating a Taylor splitting image unit by utilizing a low-order Taylor derivative according to the original image data;
fusing the Taylor split image unit and the danger parameter factor to obtain a danger indication image;
and matching the danger indication images through a danger identification matrix to obtain target danger data.
2. The method of claim 1, wherein generating Taylor split image units from the raw image data using low order Taylor derivatives comprises:
extracting pixel parameters in the original image data;
and inputting the pixel parameters into a low-order Taylor derivative standard formula to obtain the Taylor split image unit for danger image discrimination.
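The publication does not define its "low-order Taylor derivative standard formula", so the following is only a hedged sketch of what a low-order split might look like: a first-order expansion of pixel intensity along each image row, with the horizontal gradient standing in for the first derivative. The function name `taylor_split`, the sample image, and the choice of a row-wise gradient are all illustrative assumptions, not details from the patent.

```python
import numpy as np

# Illustrative only: a first-order ("low-order") Taylor expansion per pixel,
# f(x + dx) ≈ f(x) + f'(x)·dx, with the horizontal image gradient
# approximating f'(x). The patent's actual formula is unspecified.
def taylor_split(image, dx=1.0):
    """Split an image into its zeroth- and first-order Taylor terms."""
    img = image.astype(float)
    grad = np.gradient(img, axis=1)   # first derivative along each row
    return img, grad * dx             # the two "split image units"

img = np.array([[0.0, 1.0, 4.0, 9.0]])
base, first_order = taylor_split(img)
# Predict each next pixel from the expansion at the current one.
approx_next = base[:, :-1] + first_order[:, :-1]
```

With the sample row above, `np.gradient` yields central differences [1, 2, 4, 5], so the predicted next pixels are [1, 3, 8] against the true values [1, 4, 9]; a higher-order expansion would tighten the approximation at the cost of more terms per pixel.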
3. The method of claim 1, wherein said fusing the Taylor split image unit and the danger parameter factor to obtain a danger indication image comprises:
generating the danger parameter factor according to a danger standard parameter;
and fusing the Taylor split image unit and the danger parameter factor by s = v·tanh(ωg + q), where s is the danger indication image data set, v is the danger parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g.
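The claimed fusion step s = v·tanh(ωg + q) can be exercised numerically. This is a minimal sketch only: the shapes and values chosen for g, w, q, and v are arbitrary assumptions, since the publication fixes neither the dimensions nor the parameter values.

```python
import numpy as np

# Minimal numeric sketch of the claimed fusion step s = v·tanh(w·g + q).
# g, w, q, and v below are arbitrary example values, not taken from the patent.
g = np.array([0.0, 0.5, 1.0])   # Taylor split image unit (per-pixel values)
w, q = 2.0, -0.5                # assumed convolution weight and bias
v = 1.5                         # danger/risk parameter factor

s = v * np.tanh(w * g + q)      # fused danger indication values
```

Because tanh saturates at ±1, the fused values are bounded by ±v, which keeps the danger indication image on a fixed scale regardless of the raw pixel magnitudes.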
4. The method of claim 1, wherein after said matching the danger indication image through the danger identification matrix to obtain the target danger data, the method further comprises:
fitting the target danger data with the original image data to obtain a danger image result.
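Taken together, claims 1-4 describe a four-step pipeline: acquire, split, fuse, match. A hedged end-to-end sketch follows; the stand-in Taylor split, the correlation-style matching against the "danger identification matrix", and the decision threshold are all assumptions for illustration, since the publication specifies none of these concretely.

```python
import numpy as np

def recognize_danger(raw, danger_matrix, v=1.5, w=2.0, q=-0.5, threshold=0.5):
    """Illustrative pipeline for claims 1-4; every concrete choice is assumed."""
    img = raw.astype(float)                    # step 1: original image data
    g = img + np.gradient(img, axis=1)         # step 2: stand-in Taylor split
    s = v * np.tanh(w * g + q)                 # step 3: fuse, s = v·tanh(wg + q)
    score = float(np.sum(s * danger_matrix))   # step 4: match by correlation
    return score, score > threshold            # target danger data + decision

raw = np.array([[0.0, 0.2, 0.9, 1.0]])
danger_matrix = np.ones_like(raw)              # assumed identification matrix
score, dangerous = recognize_danger(raw, danger_matrix)
```

The matrix of ones simply sums the fused responses; a real identification matrix would presumably weight regions of the image by their learned danger relevance before thresholding.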
5. An image identification apparatus for snow field danger identification, comprising:
the acquisition module is used for acquiring original image data;
the splitting module is used for generating a Taylor splitting image unit by using a low-order Taylor derivative according to the original image data;
the fusion module is used for fusing the Taylor split image unit and the danger parameter factor to obtain a danger indication image;
and the matching module is used for matching the danger indication images through a danger identification matrix to obtain target danger data.
6. The apparatus of claim 5, wherein the splitting module comprises:
the extraction unit is used for extracting pixel parameters in the original image data;
and the input unit is used for inputting the pixel parameters into a low-order Taylor derivative standard formula to obtain the Taylor split image unit for danger image discrimination.
7. The apparatus of claim 5, wherein the fusion module comprises:
the generating unit is used for generating the danger parameter factor according to a danger standard parameter;
and the fusion unit is used for fusing the Taylor split image unit and the danger parameter factor by s = v·tanh(ωg + q), where s is the danger indication image data set, v is the danger parameter factor, and tanh(ωg + q) is the ω, q vector pixel convolution data set of the split image g.
8. The apparatus of claim 5, further comprising:
and the fitting module is used for fitting the target danger data with the original image data to obtain a danger image result.
9. A non-volatile storage medium, comprising a stored program, wherein the program when executed controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
CN202211149596.2A 2022-09-21 2022-09-21 Image identification method and device for snow field danger identification Pending CN115527045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211149596.2A CN115527045A (en) 2022-09-21 2022-09-21 Image identification method and device for snow field danger identification

Publications (1)

Publication Number Publication Date
CN115527045A 2022-12-27

Family

ID=84699704

Country Status (1)

Country Link
CN (1) CN115527045A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630643A (en) * 2023-05-23 2023-08-22 北京拙河科技有限公司 Pixel splitting method and device based on image object boundary recognition

Citations (6)

Publication number Priority date Publication date Assignee Title
US10304208B1 (en) * 2018-02-12 2019-05-28 Avodah Labs, Inc. Automated gesture identification using neural networks
CN112651377A (en) * 2021-01-05 2021-04-13 河北建筑工程学院 Ice and snow movement accident detection method and device and terminal equipment
CN113033687A (en) * 2021-04-02 2021-06-25 西北工业大学 Target detection and identification method under rain and snow weather condition
CN114724246A (en) * 2022-04-11 2022-07-08 中国人民解放军东部战区总医院 Dangerous behavior identification method and device
CN114999092A (en) * 2022-06-10 2022-09-02 北京拙河科技有限公司 Disaster early warning method and device based on multiple forest fire model
CN114998622A (en) * 2022-06-01 2022-09-02 南京航空航天大学 Hyperspectral image feature extraction method based on kernel Taylor decomposition

Non-Patent Citations (1)

Title
JIN Yongchang; HA Tu; YAN Weisheng: "Simulation of Optimized Identification of Target Points in Building Images of Aerial Photography Areas" *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221227