CN110659676A - Information processing method, device and storage medium - Google Patents
- Publication number
- CN110659676A (application number CN201910848845.9A)
- Authority
- CN
- China
- Prior art keywords
- state
- target object
- determining
- result
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/432—Query formulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/48—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/483—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses an information processing method comprising: acquiring sound data for a target object and determining a first state of the target object according to the sound data; acquiring image data of the target object and determining a second state of the target object according to the image data; and determining a state result of the target object according to the first state and the second state. The invention also discloses an information processing device and a storage medium.
Description
Technical Field
The present invention relates to prediction technologies, and in particular, to an information processing method, apparatus, and computer-readable storage medium.
Background
In daily life, users generally judge whether melons and fruits are mature by inspecting their external appearance based on experience. Taking the watermelon as an example: the flesh of a mature watermelon is juicy and sweet and is widely popular, and the traditional method judges whether a watermelon is mature mainly by the color of the rind, the color of the flesh, the density of the melon, and the sound it makes when tapped. However, because watermelon varieties differ and each person's experience differs, users still easily make mistakes when judging watermelon maturity this way, and the accuracy is low.
Disclosure of Invention
In view of the above, the present invention provides an information processing method, an information processing apparatus, and a computer-readable storage medium.
To this end, the technical solution of the invention is realized as follows:
the embodiment of the invention provides an information processing method, which comprises the following steps:
acquiring sound data for a target object, and determining a first state of the target object according to the sound data;
acquiring image data of the target object, and determining a second state of the target object according to the image data;
and determining a state result of the target object according to the first state and the second state.
In the foregoing solution, the sound data for the target object includes: sound data generated by vibration of the target object;
the determining a first state of the target object from the acoustic data includes:
converting sound data generated by the vibration of the target object into an electric signal;
carrying out digital processing on the electric signal to obtain a peak frequency interval of the electric signal;
and inquiring a preset corresponding relation between the frequency and the state according to the peak frequency interval, and determining a first state corresponding to the peak frequency interval.
In the foregoing solution, the image data of the target object includes: appearance image data of the target object;
the determining a second state of the target object from the image data includes:
and identifying appearance image data of the target object by using a preset image identification model, and determining a second state of the target object.
In the above scheme, the method further comprises: generating the image recognition model; the generating of the image recognition model comprises:
acquiring a training image data set; the training image dataset comprising: at least one training image data and label data corresponding to each training image data;
fine-tuning a preset neural network by using the training image data set;
extracting image features of the training image data, the image features including at least one of: network features of a neural network, red-green-blue (RGB) histogram features, and Local Directional Texture Pattern (LDTP) features;
training a classifier according to the image features and the label data corresponding to the image features to obtain a trained classifier; and taking the fine-tuned neural network and the trained classifier as the image recognition model.
In the foregoing solution, the determining the state result of the target object according to the first state and the second state includes any one of:
selecting a target state from the first state and the second state as the state result according to a preset rule;
determining weight values corresponding to the first state and the second state respectively; and performing weighting processing according to the first state, the second state, the weight value corresponding to the first state and the weight value corresponding to the second state, and taking a weighting processing result as the state result.
In the above scheme, the method further comprises:
acquiring state reference parameters for the target object; the state reference parameters comprise: hardness data;
querying a preset correspondence between state reference parameters and states according to the state reference parameter, and determining a third state;
correspondingly, the determining the state result of the target object according to the first state and the second state includes:
and determining a state result of the target object according to the first state, the second state and the third state.
In the foregoing solution, selecting a target state from the first state and the second state according to a preset rule as the state result includes:
comparing the first state with the second state to determine the difference between the two states;
judging whether the difference between the two states exceeds a difference threshold, and if it does not, selecting either one of the first state and the second state as the state result;
and when the difference between the two states exceeds the difference threshold, determining the credibility corresponding to each of the first state and the second state, and selecting a target state from the first state and the second state as the state result according to those credibilities.
An embodiment of the present invention provides an information processing apparatus, including: the system comprises a first processing module, a second processing module and a third processing module; wherein,
the first processing module is used for acquiring sound data of a target object and determining a first state of the target object according to the sound data;
the second processing module is used for acquiring image data of the target object and determining a second state of the target object according to the image data;
and the third processing module is used for determining a state result of the target object according to the first state and the second state.
An embodiment of the present invention provides an information processing apparatus, including: a processor and a memory for storing a computer program capable of running on the processor; wherein,
the processor is configured to execute the steps of any one of the information processing methods when the computer program is executed.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the information processing methods described above.
The information processing method, device, and computer-readable storage medium provided by the embodiments of the invention acquire sound data for a target object and determine a first state of the target object according to the sound data; acquire image data of the target object and determine a second state of the target object according to the image data; and determine a state result of the target object according to the first state and the second state. In the embodiments of the invention, the state (for example, the maturity of a watermelon) is judged from both the sound data and the image data of the target object; combining the two modalities can improve the accuracy of state recognition.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another information processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an identification apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present invention.
Detailed Description
In various embodiments of the present invention, acoustic data for a target object is acquired, and a first state of the target object is determined from the acoustic data; acquiring image data of the target object, and determining a second state of the target object according to the image data; and determining a state result of the target object according to the first state and the second state.
The present invention will be described in further detail with reference to examples.
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention; the method can be applied to intelligent electronic equipment; as shown in fig. 1, the method includes:
Step 101, acquiring sound data for a target object, and determining a first state of the target object according to the sound data.
In this embodiment, the sound data for the target object includes: sound data generated by vibration of the target object;
the determining a first state of the target object from the acoustic data includes:
converting sound data generated by the vibration of the target object into an electric signal;
carrying out digital processing on the electric signal to obtain a peak frequency interval of the electric signal;
and inquiring a preset corresponding relation between the frequency and the state according to the peak frequency interval, and determining a first state corresponding to the peak frequency interval.
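The frequency analysis and lookup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the sampling rate, the frequency intervals, and the state labels in `FREQ_STATE_TABLE` are invented for the example, and real intervals would come from the calibration experiments described in the surrounding text.

```python
import numpy as np

# Hypothetical correspondence between peak-frequency intervals (Hz) and
# states; real intervals would be obtained experimentally.
FREQ_STATE_TABLE = [
    ((0.0, 150.0), "mature"),
    ((150.0, 250.0), "relatively mature"),
    ((250.0, 1000.0), "immature"),
]

def peak_frequency(signal, sample_rate):
    """Digitally process the electric signal: FFT, then locate the
    frequency bin with the largest magnitude (the peak frequency)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def first_state(signal, sample_rate):
    """Query the preset frequency/state correspondence with the peak."""
    peak = peak_frequency(signal, sample_rate)
    for (lo, hi), state in FREQ_STATE_TABLE:
        if lo <= peak < hi:
            return state
    return "unknown"

# Example: a synthetic 200 Hz "tap" tone sampled at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
tap = np.sin(2 * np.pi * 200 * t)
```

With the invented table above, `first_state(tap, sr)` falls in the 150-250 Hz interval.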
Further, the sound data generated by the vibration of the target object may specifically be: and after the preset position of the target object is knocked by preset force, the preset position vibrates to generate sound data.
Here, considering that the sound may differ when different forces and/or different positions are used for tapping, recognition accuracy can be improved by acquiring the sound data generated by tapping a specific position (i.e., the predetermined position) of the target object with the same force (i.e., the preset force) to determine the first state of the target object.
Here, the preset correspondence between frequency and state may be obtained in advance through a large number of experiments. Specifically, for each of a plurality of sample objects (specifically, sample objects of the same kind): tap a predetermined position of the sample object with the preset force; collect the sample sound data generated by the vibration of the tapped sample object; convert the sample sound data into a sample electric signal; digitally process the sample electric signal to obtain its sample peak frequency interval; and determine the state of the sample object. The states of the sample objects and the sample peak frequency interval corresponding to each sample object are then analyzed to obtain the preset correspondence between frequency and state.
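The calibration procedure above, tapping many labeled samples and deriving a per-state peak-frequency interval, could be sketched as follows. The sample values are invented for illustration only.

```python
from collections import defaultdict

def build_frequency_state_table(samples):
    """samples: list of (peak_frequency_hz, state) pairs measured from
    tapped sample objects of the same kind. Returns, per state, the
    (min, max) peak-frequency interval observed in the experiments."""
    by_state = defaultdict(list)
    for freq, state in samples:
        by_state[state].append(freq)
    return {state: (min(fs), max(fs)) for state, fs in by_state.items()}

# Invented experimental data: (peak frequency, developer-assessed state).
samples = [
    (120.0, "mature"), (135.0, "mature"), (128.0, "mature"),
    (180.0, "relatively mature"), (205.0, "relatively mature"),
    (300.0, "immature"), (340.0, "immature"),
]
table = build_frequency_state_table(samples)
```

In practice the boundaries between states would also need smoothing or validation; the raw min/max intervals here are the simplest possible choice.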
It should be noted that the state of a sample object may be examined by a developer according to certain rules to determine that sample object's state. Here, the state may be of various types. For example, a state characterizing maturity may include: immature, relatively mature, mature, and so on. The various states can be recorded with different numbers, for example recording immature as the number 1, relatively mature as the number 2, and mature as the number 3.
In this embodiment, the intelligent electronic device may include or be connected to an assembly capable of tapping the target object with a given force (such as a manipulator, for which a force standard can be set) so as to generate sound data; the intelligent electronic device further includes or is connected to a module for collecting the sound data.
Step 102, acquiring image data of the target object, and determining a second state of the target object according to the image data.
In this embodiment, the image data of the target object includes: appearance image data of the target object. The intelligent electronic device may include or be connected to a module for collecting the image data, such as a camera or other module having a shooting function.
The determining a second state of the target object from the image data includes:
and identifying appearance image data of the target object by using a preset image identification model, and determining a second state of the target object.
In this embodiment, the method further includes: generating the image recognition model; the generating of the image recognition model comprises:
acquiring a training image data set; the training image dataset comprising: at least one training image data and label data corresponding to each training image data;
fine-tuning a preset neural network by using the training image data set;
extracting image features of the training image data, the image features including at least one of: network characteristics of a neural network, red-green-blue (RGB) histogram characteristics, Local Directional Texture Pattern (LDTP) characteristics;
training a classifier according to the image features and the label data corresponding to the image features to obtain a trained classifier; and taking the fine-tuned neural network and the trained classifier as the image recognition model.
Here, the neural network may be an AlexNet neural network. The network features of the neural network are specifically the outputs the neural network produces for the training image data. The label data correspond to the state: if the state characterizes maturity, the label data are maturity labels; if the state characterizes quality, the label data are quality labels, which is not elaborated here.
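Of the three feature types listed above, the RGB histogram is the simplest to illustrate. A minimal pure-Python sketch follows; a real system would additionally use a CNN (e.g., a fine-tuned AlexNet) for the network features and an LDTP implementation for the texture features, both omitted here, and the tiny "image" is invented.

```python
def rgb_histogram(pixels, bins=8):
    """Compute a concatenated per-channel histogram feature vector.
    pixels: iterable of (r, g, b) tuples with values in 0..255."""
    hist = [0] * (3 * bins)
    bin_width = 256 // bins
    for r, g, b in pixels:
        hist[r // bin_width] += 1                 # red channel bins
        hist[bins + g // bin_width] += 1          # green channel bins
        hist[2 * bins + b // bin_width] += 1      # blue channel bins
    return hist

# A tiny 2x2 "image": three dark-green rind pixels and one red pixel.
image = [(30, 120, 40), (28, 115, 35), (25, 110, 30), (200, 40, 40)]
feature = rgb_histogram(image)
```

In the full model this vector would be concatenated with the fine-tuned network's features and the LDTP features before being fed to the classifier.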
Step 103, determining a state result of the target object according to the first state and the second state.
In this embodiment, the determining the state result of the target object according to the first state and the second state includes any one of:
selecting a target state from the first state and the second state as the state result according to a preset rule;
determining weight values corresponding to the first state and the second state respectively; and performing weighting processing according to the first state, the second state, the weight value corresponding to the first state and the weight value corresponding to the second state, and taking a weighting processing result as the state result.
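With the numeric state encodings suggested earlier (e.g., 1 = immature, 2 = relatively mature, 3 = mature), the weighted-fusion alternative can be sketched as follows; the weight values are hypothetical.

```python
def fuse_states_weighted(states, weights):
    """Weighted combination of numeric state values; the weighted
    result serves as the state result."""
    assert len(states) == len(weights)
    total = sum(weights)
    return sum(s * w for s, w in zip(states, weights)) / total

# Hypothetical: the sound-based state says "mature" (3), the image-based
# state says "relatively mature" (2); sound is weighted a little more.
result = fuse_states_weighted([3, 2], [0.6, 0.4])
```

The fractional result (here 2.6) could then be rounded to the nearest encoded state or compared against thresholds, a detail the description leaves open.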
Specifically, selecting a target state from the first state and the second state according to a preset rule as the state result includes:
comparing the first state with the second state to determine the difference between the two states;
judging whether the difference between the two states exceeds a difference threshold, and if it does not, selecting either one of the first state and the second state as the state result;
and when the difference between the two states exceeds the difference threshold, determining the credibility corresponding to each of the first state and the second state, and selecting a target state from the first state and the second state as the state result according to those credibilities.
The above-mentioned difference threshold value may be preset and saved by a developer.
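The preset rule described above (accept either state when they agree within the threshold, otherwise fall back on credibility) can be sketched as follows; the credibility values and threshold are hypothetical.

```python
def select_state(first, second, cred_first, cred_second, diff_threshold=1):
    """If the two numeric states are close enough, either may serve as
    the state result (the first is chosen here); otherwise pick the
    state whose method has the higher preset credibility."""
    if abs(first - second) <= diff_threshold:
        return first
    return first if cred_first >= cred_second else second

# Hypothetical: sound says 3 (mature), image says 1 (immature), and the
# sound-based method proved more reliable in prior experiments.
result = select_state(3, 1, cred_first=0.9, cred_second=0.7)
```

Because the states disagree by more than the threshold, the sound-based state wins here on credibility.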
In this embodiment, the method further includes:
acquiring state reference parameters for the target object; the state reference parameters comprise: hardness data;
querying the preset correspondence between state reference parameters and states according to the state reference parameter, and determining a third state;
correspondingly, the determining the state result of the target object according to the first state and the second state includes:
and determining a state result of the target object according to the first state, the second state and the third state.
Here, the hardness data may refer to a pressure value that the skin of the target object can bear. In particular, the intelligent electronic device may comprise or be connected to a pressure monitor.
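The hardness-based lookup works like the frequency lookup: a preset table maps hardness (the pressure the skin can bear) to a state. A sketch with invented pressure intervals and the numeric state encoding from earlier:

```python
# Hypothetical hardness intervals (arbitrary pressure units) -> state.
HARDNESS_STATE_TABLE = [
    ((0.0, 5.0), 3),   # soft skin   -> mature (numeric state 3)
    ((5.0, 9.0), 2),   # medium      -> relatively mature
    ((9.0, 99.0), 1),  # hard skin   -> immature
]

def third_state(hardness):
    """Query the preset hardness/state correspondence."""
    for (lo, hi), state in HARDNESS_STATE_TABLE:
        if lo <= hardness < hi:
            return state
    raise ValueError("hardness out of calibrated range")

state = third_state(7.2)
```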
In this embodiment, the determining the state result of the target object according to the first state, the second state and the third state includes any one of:
selecting a target state from the first state, the second state and the third state as a state result according to a preset rule;
determining weight values corresponding to the first state, the second state and the third state respectively; and performing weighting processing according to the first state, the second state, the third state, the weight value corresponding to the first state, the weight value corresponding to the second state and the weight value corresponding to the third state, and taking a weighting processing result as the state result.
It should be noted that the first state, the second state, and the third state may each correspond to different numerical values, that is, different states are represented by different values. The weighting is therefore performed using the value of each state and the weight corresponding to each state to obtain the weighted result.
In this embodiment, selecting a target state from the first state, the second state, and the third state according to a preset rule as the state result includes:
comparing the first state, the second state, and the third state, and determining the difference between each pair of states;
judging whether the difference between any two states exceeds a difference threshold; if no pairwise difference exceeds the threshold, selecting any one of the first state, the second state, and the third state as the target state and taking it as the state result;
and when the difference between some pair of states exceeds the difference threshold, determining the credibility corresponding to each of the first state, the second state, and the third state, and selecting a target state as the state result according to those credibilities.
It should be noted that in this embodiment different states are represented by different numerical values, so the difference between any two states can be computed and compared against the difference threshold; the target state is then determined from the comparison result by the above method and taken as the state result.
When no pairwise difference exceeds the difference threshold, the error range is small, and any one state can be selected as the target state. When some pairwise difference exceeds the difference threshold, a large error may exist; the concept of credibility can then be introduced, and the state with the highest credibility is selected as the target state. Here, the credibility of each state may be preset and saved, and may be determined in advance through a large number of experiments: for the same sample objects, the three methods (determining the first state from the sound data, determining the second state from the image data, and determining the third state from the state reference parameter) are each compared against the labels (that is, the known state results); the lower a method's error rate, the higher its credibility.
For example, if the error rates of determining the first state from the sound data, the second state from the image data, and the third state from the state reference parameter increase in that order, then the credibilities of the three methods decrease in that order; that is, the credibilities of the first state, the second state, and the third state run from high to low.
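The experiment-driven assignment of credibilities (lower error rate, higher credibility) can be sketched as follows; the per-method error rates are invented, and the mapping credibility = 1 - error rate is one simple choice that preserves the ordering described above.

```python
def credibilities_from_error_rates(error_rates):
    """Map each method's measured error rate to a credibility.
    credibility = 1 - error_rate, so ordering methods from low to
    high error rate orders them from high to low credibility."""
    return {method: 1.0 - err for method, err in error_rates.items()}

# Invented per-method error rates measured against labeled samples.
errors = {"sound": 0.10, "image": 0.15, "hardness": 0.25}
cred = credibilities_from_error_rates(errors)
best = max(cred, key=cred.get)
```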
In this embodiment, different types of target objects correspond to different types of state results; that is, the state results determined for different types of target objects differ. For example, for melons and fruits, the state result may characterize maturity; for solid wood furniture, the state result may characterize the quality of the furniture (or of its wood).
The method provided by this embodiment can be applied to any target object whose state can be judged with reference to parameters such as sound, appearance image, and surface hardness; for example, watermelons, Hami melons, and other melons.
The following uses watermelon maturity detection as an example to explain the above method. The information processing method provided by this embodiment can detect different varieties of watermelon and improves detection accuracy compared with judgments based on the user's personal experience.
Fig. 2 is a flow chart illustrating another information processing method according to an embodiment of the present invention. As shown in Fig. 2, the method is applied to identifying watermelon maturity in combination with the method shown in Fig. 1, and comprises the following steps:
Step 201, acquiring sound data and image data for the watermelon.
Here, the sound data for watermelon can be obtained by tapping the watermelon peel to generate a sound and collecting the sound.
The image data for the watermelon is obtained by taking a picture of it; the picture may specifically be a color picture.
In particular, the method may be applied to an identification device, which may adopt the structure shown in Fig. 3 and may include: a sound acquisition module, a sound processing module, an image acquisition module, and a result display module. When the watermelon is tapped, it vibrates and produces a sound; the sound acquisition module collects the sound and converts it into an electric signal, and the sound processing module analyzes the electric signal, obtains a conclusion, and transmits it to the result display module to display the detection result. The image acquisition module acquires image data for the watermelon.
Step 202, processing the sound data, and determining the maturity of the watermelon as a first recognition result.
Specifically, the sound processing module digitally processes the electric signal to obtain its peak frequency interval, retrieves the frequency intervals obtained in advance through a large number of experiments, and queries those intervals with the obtained peak frequency interval to obtain the corresponding maturity of the watermelon.
Step 203, recognizing the image data, and determining the maturity of the watermelon as a second recognition result.
Here, step 203 specifically includes: extracting features from the image data of the watermelon and determining the maturity of the watermelon according to the extracted features. Specifically, this may include: classifying the watermelon according to the extracted features to identify its variety, and determining its maturity using the trained watermelon classifier for that variety.
The method further comprises: generating an image recognition model for recognizing the image data. Specifically, the method for generating the image recognition model in the method shown in Fig. 1 may be used.
For the image recognition model for the watermelon maturity in this embodiment, the generating the image recognition model includes:
step 031: preprocessing the acquired watermelon images, inputting the labelled watermelon images into an Alexnet network, fine-tuning the Alexnet network using the labels to obtain an Alexnet network for feature extraction, and taking the output of this network as the extracted Alexnet network features;
step 032: extracting image direction and texture features using LDTP, and combining the Alexnet network features from step 031, the RGB histogram features and the LDTP features to obtain the image features;
step 033: and (3) repeating the watermelon image acquired in the step 001, labeling watermelons with different maturity degrees with different digital labels to form training set data, processing the training set data to obtain a training set characteristic set, and training an SVM classifier by using the set data to obtain an SVM classification model.
And obtaining an image recognition model capable of recognizing the watermelon maturity according to the Alexnet network and the SVM classifier.
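Steps 031-033 can be sketched roughly as follows. In this Python sketch, a fixed random projection stands in for the fine-tuned Alexnet features and a simple gradient-sign count stands in for LDTP; the tiny synthetic images, feature sizes and labels are all illustrative assumptions, and scikit-learn's `SVC` is used for the SVM classifier.

```python
import numpy as np
from sklearn.svm import SVC

RNG = np.random.default_rng(0)
IMG_SHAPE = (8, 8, 3)  # tiny RGB images, values in [0, 1], for illustration
# Fixed random projection as a stand-in for Alexnet features (step 031).
PROJ = RNG.standard_normal((32, int(np.prod(IMG_SHAPE))))

def cnn_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for the fine-tuned Alexnet feature vector (step 031)."""
    return PROJ @ image.reshape(-1)

def rgb_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel RGB histogram features (part of step 032)."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 1))[0]
        for c in range(3)
    ]).astype(float)

def texture_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for LDTP direction/texture features: gradient-sign counts."""
    gray = image.mean(axis=-1)
    gx = np.diff(gray, axis=1).reshape(-1)
    gy = np.diff(gray, axis=0).reshape(-1)
    return np.array([(gx > 0).sum(), (gx <= 0).sum(),
                     (gy > 0).sum(), (gy <= 0).sum()], dtype=float)

def image_features(image: np.ndarray) -> np.ndarray:
    """Concatenate the three feature groups into one vector (step 032)."""
    return np.concatenate([cnn_features(image), rgb_histogram(image),
                           texture_features(image)])

# Step 033: train the SVM on labelled features (synthetic data here:
# "ripe" images are brighter, "unripe" images darker).
ripe = [RNG.uniform(0.6, 1.0, IMG_SHAPE) for _ in range(20)]
unripe = [RNG.uniform(0.0, 0.4, IMG_SHAPE) for _ in range(20)]
X = np.array([image_features(im) for im in ripe + unripe])
y = np.array([1] * 20 + [0] * 20)  # 1 = ripe, 0 = unripe
clf = SVC(kernel="linear").fit(X, y)
```

The feature extractor and the trained classifier together play the role of the image recognition model; in the actual scheme the CNN and LDTP components would replace the placeholders above.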
The watermelon image can be collected through an image collecting module, and the image recognition model is applied to the image processing module.
In the process of generating the image recognition model, a ripeness recognition model can be obtained for each variety by training separately on different varieties of watermelon.
And step 204, determining the maturity degree of the watermelon according to the first identification result and the second identification result.
Here, the ripeness degree of the watermelon can be displayed to the user through the result display module for the user to refer to.
Specifically, in the step 204, the maturity degree of watermelon is determined according to the first recognition result and the second recognition result, which may adopt the same method as that of determining the state result of the target object according to the first state and the second state in the method shown in fig. 1, that is, the first recognition result may be used as the first state, the second recognition result may be used as the second state, and the obtained maturity degree of watermelon is the state result of the target object; and will not be described in detail herein.
In this embodiment, in addition to identifying the ripeness of watermelon by sound data and image data, auxiliary detection may be performed by detecting the hardness of watermelon peel.
Specifically, the method may further include: reading hardness data of the watermelon peel, and comparing the hardness data against the correspondence table of hardness and ripeness, so as to determine the ripeness of the watermelon as a third identification result.
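The hardness lookup above can be sketched as a simple table query; the boundary values and units below are illustrative assumptions, not values given in the text.

```python
# Hypothetical hardness-to-ripeness correspondence table (arbitrary units);
# in practice the boundaries would come from calibration measurements.
HARDNESS_TABLE = [
    ((0.0, 2.0), "overripe"),          # soft peel
    ((2.0, 4.0), "ripe"),
    ((4.0, float("inf")), "unripe"),   # hard peel
]

def ripeness_from_hardness(hardness: float) -> str:
    """Query the correspondence table for the third identification result."""
    for (lo, hi), label in HARDNESS_TABLE:
        if lo <= hardness < hi:
            return label
    return "unknown"
```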
When the third identification result is obtained, the step 204 of determining the maturity degree of watermelon may refer to the first identification result, the second identification result and the third identification result. The specific reference method may adopt the same method as that in the method shown in fig. 1, in which the state result of the target object is determined according to the first state, the second state, and the third state, that is, the first recognition result may be used as the first state, the second recognition result may be used as the second state, the third recognition result may be used as the third state, and the obtained ripeness degree of the watermelon is the state result of the target object, which is not described herein again.
According to the scheme of this embodiment, the ripeness of the watermelon is detected bimodally from image data and sound data: the image of the watermelon and the sound produced by tapping it are fused to judge whether the watermelon is ripe. Combining the two modalities further improves the accuracy of watermelon ripeness recognition.
Fig. 4 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention; the device can be applied to intelligent electronic equipment; as shown in fig. 4, the apparatus includes: the device comprises a first processing module, a second processing module and a third processing module.
The first processing module is used for acquiring sound data of a target object and determining a first state of the target object according to the sound data;
the second processing module is used for acquiring image data of the target object and determining a second state of the target object according to the image data;
and the third processing module is used for determining a state result of the target object according to the first state and the second state.
Specifically, the sound data for the target object includes: sound data generated by vibration of the target object; the first processing module is used for converting sound data generated by vibration of the target object into an electric signal; carrying out digital processing on the electric signal to obtain a peak frequency interval of the electric signal; and inquiring a preset corresponding relation between the frequency and the state according to the peak frequency interval, and determining a first state corresponding to the peak frequency interval.
Specifically, the image data of the target object includes: appearance image data of the target object;
the second processing module is configured to identify appearance image data of the target object by using a preset image identification model, and determine a second state of the target object.
In particular, the apparatus further comprises a generation module for generating the image recognition model. Specifically, the generating module is specifically configured to acquire a training image dataset; the training image dataset comprising: at least one training image data and label data corresponding to each training image data; fine-tuning a preset neural network by using the training image data set; extracting image features of the training image data, the image features including at least one of: network characteristics, RGB histogram characteristics, LDTP characteristics of the neural network; training a classifier according to the image features and the label data corresponding to the image features to obtain a trained classifier; and taking the trimmed neural network and the trained classifier as the image recognition model.
Specifically, the third processing module is configured to determine a state result of the target object according to the first state and the second state by using any one of the following methods:
selecting a target state from the first state and the second state as the state result according to a preset rule;
determining weight values corresponding to the first state and the second state respectively; and performing weighting processing according to the first state, the second state, the weight value corresponding to the first state and the weight value corresponding to the second state, and taking a weighting processing result as the state result.
In this embodiment, the apparatus further includes: the fourth processing module is used for acquiring state reference parameters aiming at the target object; the state reference parameters comprise: hardness data; inquiring the corresponding relation between the preset state reference parameter and the state result according to the state reference parameter, and determining a third state;
correspondingly, the third processing module is configured to determine a state result of the target object according to the first state, the second state, and the third state.
Specifically, the third processing module is specifically configured to determine a state result of the target object according to the first state, the second state, and the third state by using any one of the following methods:
selecting a target state from the first state, the second state and the third state as a state result according to a preset rule;
determining weight values corresponding to the first state, the second state and the third state respectively; and performing weighting processing according to the first state, the second state, the third state, the weight value corresponding to the first state, the weight value corresponding to the second state and the weight value corresponding to the third state, and taking a weighting processing result as the state result.
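The weighting method described above can be sketched as follows, treating each state as a numeric ripeness score in [0, 1]; the example scores and weight values are illustrative assumptions. The same function covers both the two-state and three-state cases.

```python
def fuse_states(states: list[float], weights: list[float]) -> float:
    """Weight-normalised combination of per-modality ripeness scores."""
    if not states or len(states) != len(weights):
        raise ValueError("states and weights must be equal-length, non-empty")
    total = sum(weights)
    return sum(s * w for s, w in zip(states, weights)) / total

# Example: sound score 0.8, image score 0.6, hardness score 0.7,
# with assumed weights favouring the sound modality.
result = fuse_states([0.8, 0.6, 0.7], weights=[0.5, 0.3, 0.2])
```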
Specifically, the third processing module is configured to compare the first state and the second state, and determine a difference between the two states; judging whether the difference value of the two states exceeds a difference value threshold value or not, and selecting any one state from the first state and the second state as the state result if the difference value of the two states does not exceed the difference value threshold value; and when the difference value of the two states exceeds a difference threshold value, determining the corresponding credibility of the first state and the second state respectively, and selecting a target state from the first state and the second state as the state result according to the corresponding credibility of the first state and the second state respectively.
Specifically, the third processing module is configured to compare the first state, the second state, and the third state, and determine a difference between any two states; judging whether the difference value of any two states exceeds a difference value threshold value, determining that the difference value of any two states does not exceed the difference value threshold value, and selecting any one state from the first state, the second state and the third state as the state result; and when the situation that the difference value of any two states exceeds a difference value threshold value is determined, determining the corresponding credibility of the first state, the second state and the third state respectively, and selecting a target state as the state result according to the corresponding credibility of the first state, the second state and the third state respectively.
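The difference-threshold and credibility selection described above can be sketched as follows; the threshold, the scores and the credibility values are illustrative assumptions. States are again treated as numeric scores so that pairwise differences are well defined.

```python
from itertools import combinations

def select_state(states: list[float], credibilities: list[float],
                 threshold: float = 0.2) -> float:
    """Pick a final state from per-modality scores.

    If every pairwise difference is within the threshold, the states agree
    and any one of them may serve as the result; otherwise the state from
    the most credible modality is chosen.
    """
    if any(abs(a - b) > threshold for a, b in combinations(states, 2)):
        best = max(range(len(states)), key=lambda i: credibilities[i])
        return states[best]
    return states[0]  # agreement: any state will do
```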
It should be noted that: in the information processing apparatus provided in the above embodiment, when performing information processing, only the division of each program module is exemplified, and in practical applications, the processing may be distributed to different program modules according to needs, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the information processing apparatus and the information processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Fig. 5 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present invention. The apparatus 50 comprises: a processor 501 and a memory 502 for storing computer programs executable on the processor; wherein, the processor 501 is configured to execute, when running the computer program, the following steps: acquiring sound data aiming at a target object, and determining a first state of the target object according to the sound data; acquiring image data of the target object, and determining a second state of the target object according to the image data; and determining a state result of the target object according to the first state and the second state. The sound data for the target object includes: sound data generated by vibration of the target object; image data of the target object, comprising: appearance image data of the target object.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: converting sound data generated by the vibration of the target object into an electric signal; carrying out digital processing on the electric signal to obtain a peak frequency interval of the electric signal; and inquiring a preset corresponding relation between the frequency and the state according to the peak frequency interval, and determining a first state corresponding to the peak frequency interval.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: and identifying appearance image data of the target object by using a preset image identification model, and determining a second state of the target object.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: acquiring a training image data set; the training image dataset comprising: at least one training image data and label data corresponding to each training image data; fine-tuning a preset neural network by using the training image data set; extracting image features of the training image data, the image features including at least one of: network characteristics, RGB histogram characteristics, LDTP characteristics of the neural network; training a classifier according to the image features and the label data corresponding to the image features to obtain a trained classifier; and taking the trimmed neural network and the trained classifier as the image recognition model.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: selecting a target state from the first state and the second state as the state result according to a preset rule; or determining the weight values corresponding to the first state and the second state respectively; and performing weighting processing according to the first state, the second state, the weight value corresponding to the first state and the weight value corresponding to the second state, and taking a weighting processing result as the state result.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: acquiring state reference parameters for the target object; the state reference parameters comprise: hardness data; inquiring the corresponding relation between the preset state reference parameter and the state result according to the state reference parameter, and determining a third state; correspondingly, the determining the state result of the target object according to the first state and the second state includes: and determining a state result of the target object according to the first state, the second state and the third state.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: selecting a target state from the first state, the second state and the third state as a state result according to a preset rule; or determining weight values corresponding to the first state, the second state and the third state respectively; and performing weighting processing according to the first state, the second state, the third state, the weight value corresponding to the first state, the weight value corresponding to the second state and the weight value corresponding to the third state, and taking a weighting processing result as the state result.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: comparing the first state with the second state to determine the difference between the two states; judging whether the difference value of the two states exceeds a difference value threshold value or not, and selecting any one state from the first state and the second state as the state result if the difference value of the two states does not exceed the difference value threshold value; and when the difference value of the two states exceeds a difference threshold value, determining the corresponding credibility of the first state and the second state respectively, and selecting a target state from the first state and the second state as the state result according to the corresponding credibility of the first state and the second state respectively.
In an embodiment, the processor 501 is further configured to execute, when running the computer program, the following: comparing the first state, the second state and the third state, and determining the difference value of any two states; judging whether the difference value of any two states exceeds a difference value threshold value, determining that the difference value of any two states does not exceed the difference value threshold value, and selecting any one state from the first state, the second state and the third state as the state result; and when the situation that the difference value of any two states exceeds a difference value threshold value is determined, determining the corresponding credibility of the first state, the second state and the third state respectively, and selecting a target state as the state result according to the corresponding credibility of the first state, the second state and the third state respectively.
It should be noted that: the information processing apparatus and the information processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
In practical applications, the apparatus 50 may further include: at least one network interface 503. Various components within information handling device 50 are coupled together by bus system 504. It is understood that the bus system 504 is used to enable communications among the components. The bus system 504 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 504 in fig. 5. The number of the processors 501 may be at least one. The network interface 503 is used for communication between the information processing apparatus 50 and other devices in a wired or wireless manner.
The memory 502 in the embodiment of the present invention is used to store various types of data to support the operation of the information processing apparatus 50.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 501. The processor 501 may be a general purpose processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc. The processor 501 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiments of the invention may be directly implemented by a hardware decoding processor, or implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 502; the processor 501 reads the information in the memory 502 and performs the steps of the aforementioned methods in conjunction with its hardware.
In an exemplary embodiment, the information processing apparatus 50 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the foregoing methods.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs: acquiring sound data aiming at a target object, and determining a first state of the target object according to the sound data; acquiring image data of the target object, and determining a second state of the target object according to the image data; and determining a state result of the target object according to the first state and the second state. The sound data for the target object includes: sound data generated by vibration of the target object; image data of the target object, comprising: appearance image data of the target object.
In one embodiment, the computer program, when executed by the processor, performs: converting sound data generated by the vibration of the target object into an electric signal; carrying out digital processing on the electric signal to obtain a peak frequency interval of the electric signal; and inquiring a preset corresponding relation between the frequency and the state according to the peak frequency interval, and determining a first state corresponding to the peak frequency interval.
In one embodiment, the computer program, when executed by the processor, performs: and identifying appearance image data of the target object by using a preset image identification model, and determining a second state of the target object.
In one embodiment, the computer program, when executed by the processor, performs: acquiring a training image data set; the training image dataset comprising: at least one training image data and label data corresponding to each training image data; fine-tuning a preset neural network by using the training image data set; extracting image features of the training image data, the image features including at least one of: network characteristics, RGB histogram characteristics, LDTP characteristics of the neural network; training a classifier according to the image features and the label data corresponding to the image features to obtain a trained classifier; and taking the trimmed neural network and the trained classifier as the image recognition model.
In one embodiment, the computer program, when executed by the processor, performs: selecting a target state from the first state and the second state as the state result according to a preset rule; or determining the weight values corresponding to the first state and the second state respectively; and performing weighting processing according to the first state, the second state, the weight value corresponding to the first state and the weight value corresponding to the second state, and taking a weighting processing result as the state result.
In one embodiment, the computer program, when executed by the processor, performs: acquiring state reference parameters for the target object; the state reference parameters comprise: hardness data; inquiring the corresponding relation between the preset state reference parameter and the state result according to the state reference parameter, and determining a third state; correspondingly, the determining the state result of the target object according to the first state and the second state includes: and determining a state result of the target object according to the first state, the second state and the third state.
In one embodiment, the computer program, when executed by the processor, performs: selecting a target state from the first state, the second state and the third state as a state result according to a preset rule; or determining weight values corresponding to the first state, the second state and the third state respectively; and performing weighting processing according to the first state, the second state, the third state, the weight value corresponding to the first state, the weight value corresponding to the second state and the weight value corresponding to the third state, and taking a weighting processing result as the state result.
In one embodiment, the computer program, when executed by the processor, performs: comparing the first state with the second state to determine the difference between the two states; judging whether the difference value of the two states exceeds a difference value threshold value or not, and selecting any one state from the first state and the second state as the state result if the difference value of the two states does not exceed the difference value threshold value; and when the difference value of the two states exceeds a difference threshold value, determining the corresponding credibility of the first state and the second state respectively, and selecting a target state from the first state and the second state as the state result according to the corresponding credibility of the first state and the second state respectively.
In one embodiment, the computer program, when executed by the processor, performs: comparing the first state, the second state and the third state, and determining the difference value of any two states; judging whether the difference value of any two states exceeds a difference value threshold value, determining that the difference value of any two states does not exceed the difference value threshold value, and selecting any one state from the first state, the second state and the third state as the state result; and when the situation that the difference value of any two states exceeds a difference value threshold value is determined, determining the corresponding credibility of the first state, the second state and the third state respectively, and selecting a target state as the state result according to the corresponding credibility of the first state, the second state and the third state respectively.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.
Claims (10)
1. An information processing method, characterized in that the method comprises:
acquiring sound data aiming at a target object, and determining a first state of the target object according to the sound data;
acquiring image data of the target object, and determining a second state of the target object according to the image data;
and determining a state result of the target object according to the first state and the second state.
2. The method of claim 1, wherein the sound data for the target object comprises: sound data generated by vibration of the target object;
the determining a first state of the target object according to the sound data includes:
converting sound data generated by the vibration of the target object into an electric signal;
carrying out digital processing on the electric signal to obtain a peak frequency interval of the electric signal;
and inquiring a preset corresponding relation between the frequency and the state according to the peak frequency interval, and determining a first state corresponding to the peak frequency interval.
3. The method of claim 1, wherein the image data of the target object comprises: appearance image data of the target object;
the determining a second state of the target object from the image data includes:
and identifying appearance image data of the target object by using a preset image identification model, and determining a second state of the target object.
4. The method of claim 3, further comprising: generating the image recognition model; the generating of the image recognition model comprises:
acquiring a training image data set; the training image dataset comprising: at least one training image data and label data corresponding to each training image data;
fine-tuning a preset neural network by using the training image data set;
extracting image features of the training image data, the image features including at least one of: network characteristics of a neural network, red, green and blue RGB histogram characteristics and local directional texture mode LDTP characteristics;
training a classifier according to the image features and the label data corresponding to the image features to obtain a trained classifier; and taking the trimmed neural network and the trained classifier as the image recognition model.
5. The method of claim 1, wherein the determining a state result of the target object according to the first state and the second state comprises any one of:
selecting a target state from the first state and the second state as the state result according to a preset rule;
determining weight values corresponding to the first state and the second state respectively; and performing weighting processing according to the first state, the second state, the weight value corresponding to the first state and the weight value corresponding to the second state, and taking a weighting processing result as the state result.
6. The method of claim 1, further comprising:
acquiring a state reference parameter for the target object, the state reference parameter comprising: hardness data;
querying a preset correspondence between the state reference parameter and the state result according to the state reference parameter, and determining a third state;
correspondingly, the determining the state result of the target object according to the first state and the second state includes:
and determining a state result of the target object according to the first state, the second state and the third state.
7. The method according to claim 5, wherein the selecting a target state from the first state and the second state as the state result according to a preset rule comprises:
comparing the first state with the second state to determine a difference between the two states;
when the difference between the two states does not exceed a difference threshold, selecting either of the first state and the second state as the state result;
and when the difference between the two states exceeds the difference threshold, determining the credibility corresponding to each of the first state and the second state, and selecting a target state from the first state and the second state as the state result according to those credibilities.
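The selection rule of claim 7 can be sketched as follows, with the states again represented as numeric scores; the threshold and credibility values are illustrative assumptions:

```python
def select_state(first: float, second: float,
                 cred_first: float, cred_second: float,
                 threshold: float = 0.2) -> float:
    """If the two states agree within the threshold, either one serves as
    the result; otherwise the state with higher credibility is chosen."""
    if abs(first - second) <= threshold:
        return first  # states agree; either may be returned
    return first if cred_first >= cred_second else second
```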
8. An information processing apparatus characterized in that the apparatus comprises: the system comprises a first processing module, a second processing module and a third processing module; wherein,
the first processing module is used for acquiring sound data of a target object and determining a first state of the target object according to the sound data;
the second processing module is used for acquiring image data of the target object and determining a second state of the target object according to the image data;
and the third processing module is used for determining a state result of the target object according to the first state and the second state.
9. An information processing apparatus characterized in that the apparatus comprises: a processor and a memory for storing a computer program capable of running on the processor; wherein,
the processor is adapted to perform the steps of the method of any one of claims 1 to 7 when running the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910848845.9A CN110659676A (en) | 2019-09-09 | 2019-09-09 | Information processing method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110659676A true CN110659676A (en) | 2020-01-07 |
Family
ID=69038024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910848845.9A Pending CN110659676A (en) | 2019-09-09 | 2019-09-09 | Information processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110659676A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112733588A (en) * | 2020-08-13 | 2021-04-30 | 精英数智科技股份有限公司 | Machine running state detection method and device and electronic equipment |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002304398A (en) * | 2001-02-02 | 2002-10-18 | Masanobu Kujirada | System for providing feeling of season |
CN101413928A (en) * | 2008-11-14 | 2009-04-22 | 江苏大学 | Fowl egg crack rapid on-line nondestructive detection device and method based on acoustic characteristic |
TW201017560A (en) * | 2008-10-28 | 2010-05-01 | Tsint | Fruit classification method using neural network |
CN103487147A (en) * | 2013-09-06 | 2014-01-01 | 闻泰通讯股份有限公司 | System and method for picking fruits through electronic equipment |
CN104867046A (en) * | 2015-05-29 | 2015-08-26 | 广东欧珀移动通信有限公司 | Method and device for selecting article intelligently |
CN106768081A (en) * | 2017-02-28 | 2017-05-31 | 河源弘稼农业科技有限公司 | A kind of method and system for judging fruits and vegetables growth conditions |
CN107014814A (en) * | 2017-05-25 | 2017-08-04 | 河南嘉禾智慧农业科技有限公司 | A kind of fruit maturity automatic recognition system |
CN108520758A (en) * | 2018-03-30 | 2018-09-11 | 清华大学 | A kind of audio visual cross-module state object material search method and system |
CN109002851A (en) * | 2018-07-06 | 2018-12-14 | 东北大学 | It is a kind of based on the fruit classification method of image multiple features fusion and application |
CN109447165A (en) * | 2018-11-02 | 2019-03-08 | 西安财经学院 | A kind of quality of agricultural product state identification method and device |
CN109459499A (en) * | 2018-12-26 | 2019-03-12 | 广东机电职业技术学院 | A kind of ripe degree fast detector of the watermelon based on STM32 and method |
CN109631486A (en) * | 2018-12-18 | 2019-04-16 | 广东美的白色家电技术创新中心有限公司 | A kind of food monitoring method, refrigerator and the device with store function |
CN109655414A (en) * | 2018-11-27 | 2019-04-19 | Oppo广东移动通信有限公司 | Electronic equipment, information-pushing method and Related product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8254645B2 (en) | Image processing apparatus and method, and program | |
CN105160318B (en) | Lie detecting method based on facial expression and system | |
JP4543423B2 (en) | Method and apparatus for automatic object recognition and collation | |
Wang et al. | Living-skin classification via remote-PPG | |
US8818104B2 (en) | Image processing apparatus and image processing method | |
US7643674B2 (en) | Classification methods, classifier determination methods, classifiers, classifier determination devices, and articles of manufacture | |
EP3298538A1 (en) | Identifying living skin tissue in a video sequence | |
WO2016185004A1 (en) | Identifying living skin tissue in a video sequence | |
CN109190456B (en) | Multi-feature fusion overlook pedestrian detection method based on aggregated channel features and gray level co-occurrence matrix | |
CN107292228A (en) | A kind of method for accelerating face recognition search speed | |
CN114937232A (en) | Wearing detection method, system and equipment for medical waste treatment personnel protective appliance | |
CN106491322A (en) | Blind-man crutch control system and method based on OpenCV image recognitions | |
KR100602576B1 (en) | Face recognition method, and method for searching and displaying character appearance using the same | |
CN110659676A (en) | Information processing method, device and storage medium | |
CN112843425B (en) | Sleeping posture detection method and device based on sleeping pillow, electronic equipment and storage medium | |
US8326457B2 (en) | Apparatus for detecting user and method for detecting user by the same | |
JP5552946B2 (en) | Face image sample collection device, face image sample collection method, program | |
CN112307453A (en) | Personnel management method and system based on face recognition | |
CN114581819B (en) | Video behavior recognition method and system | |
CN115457595A (en) | Method for associating human face with human body, electronic device and storage medium | |
CN115375954A (en) | Chemical experiment solution identification method, device, equipment and readable storage medium | |
CN108875572A (en) | The pedestrian's recognition methods again inhibited based on background | |
JP5800557B2 (en) | Pattern recognition device, pattern recognition method and program | |
CN105989339B (en) | Method and apparatus for detecting target | |
Saat et al. | Development of watermelon ripeness grading system based on colour histogram |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200107 |