CN107977615A - Smart home robot system based on instant messaging - Google Patents

Smart home robot system based on instant messaging

Info

Publication number
CN107977615A
Authority
CN
China
Prior art keywords
image
module
robot
vector
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711193469.1A
Other languages
Chinese (zh)
Inventor
朱治广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Province Ishida Furniture Co Ltd
Original Assignee
Anhui Province Ishida Furniture Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Province Ishida Furniture Co Ltd
Priority to CN201711193469.1A
Publication of CN107977615A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G06V 20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention belongs to the field of smart homes and discloses a smart home robot system based on instant messaging. The system is provided with a three-dimensional scanning module, an image recognition module, an image editing module, a robot, a server and a client; the three-dimensional scanning module is electrically connected to the image recognition module, the image recognition module is electrically connected to the image editing module, the image editing module is connected to the robot by wireless signal, the robot is controlled by signals from the server, and the client can send instructions to the server. The invention uses three-dimensional modeling technology to generate position images of all furniture and home appliances, marks the device information of each item, and transmits this information to the robot. A user can remotely issue control instructions through the client; the instructions are relayed to the robot via the server, and the robot operates according to them. Users can control the smart home in an anthropomorphic way, in point-to-point or group-communication mode, which improves control capability and the interactive experience.

Description

Smart home robot system based on instant messaging
Technical Field
The invention belongs to the field of smart homes and particularly relates to a smart home robot system based on instant messaging.
Background
At present, people are entering the smart home era. With the development of the mobile internet and the internet of things, people can interact with smart home devices everywhere. In recent years, smart home robot technologies and products have appeared one after another, usually as a module on smart hardware or home appliances used to control the smart home devices. However, control still remains at the level of simple instructions, the interactive information is very limited, and the "intelligence" is mostly embodied in adding various sensors, such as temperature sensors and distance sensors, to the devices. The combination of instant messaging and smart home robots is still at a preliminary stage and remains to be developed.
Cognition and understanding of shapes are an important basis for a robot to acquire external information and to make judgments and responses. Automatic recognition of shape similarity is one of the key technologies for improving the efficiency of robot visual cognition and expanding the field of intelligent cognition. It is widely applicable in industrial technology, graphics and image processing, pattern recognition and artificial intelligence, so it is necessary to develop a shape similarity recognition technique. With the continuing development of computer digitization and graphics technology, the efficiency of digitizing geometric feature information has greatly improved, and the availability of reasonable, efficient algorithms and environment platforms makes such research fully feasible.
Existing shape similarity recognition methods include probabilistic and statistical algorithms, the minimum mean square error of characteristic values, and weighted-average algorithms over necessary geometric appearance features. Although they achieve a certain efficiency, they have shortcomings: the implementation of the algorithm does not match visual resolution intuitively; the algorithms are complex, so the data processing load is large and the computation cost is high; and the averaging in these algorithms dilutes the influence of changes in important geometric features on the overall similarity, causing some deviation in accuracy and stability.
In summary, the problems of the prior art are: control of the household robot is still limited to simple instructions, the interactive information is very limited, and the "intelligence" is mostly embodied in adding various sensors, such as temperature sensors and distance sensors, to the devices. The combination of instant messaging and smart home robots is still at a preliminary stage and remains to be developed.
Disclosure of Invention
To address the problems in the prior art, the invention provides a smart home robot system based on instant messaging.
The smart home robot system based on instant messaging is provided with a three-dimensional scanning module, an image recognition module, an image editing module, a robot, a server and a client; the three-dimensional scanning module is electrically connected to the image recognition module, the image recognition module is electrically connected to the image editing module, the image editing module is connected to the robot by wireless signal, the robot is controlled by signals from the server, and the client can send instructions to the server.
The image recognition module is provided with a digital-to-analog converter.
The image generated by the three-dimensional scanning module is at a 1:1 scale.
The image editing module is provided with three-dimensional modeling software and a wireless transmitting device.
Different clients have different function permissions, which prevents accidents caused by children's erroneous operations.
The recognition method of the image recognition module comprises: extracting color features and adaptive LBP operator features, and constructing a multi-feature low-rank matrix representation model:
where α is a coefficient greater than 0 and the error term is used to measure errors caused by noise and outliers;
this is equivalent to the following model:
the image is then corrected and the located image is output.
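The optimization formulas of the model appear only as figures in the original and are not reproduced above. For illustration, the sketch below solves a single-feature low-rank representation (LRR) problem of the same family, min ||J||_* + α||E||_2,1 subject to X = XA + E, A = J, using inexact ALM; the parameter values, variable names and the reduction to one feature matrix are assumptions of this sketch, not the patent's exact multi-feature model.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of the l2,1 norm (noise/outlier term E)."""
    out = np.zeros_like(M)
    for j in range(M.shape[1]):
        nrm = np.linalg.norm(M[:, j])
        if nrm > tau:
            out[:, j] = (1.0 - tau / nrm) * M[:, j]
    return out

def lrr(X, alpha=0.1, mu=1e-2, rho=1.5, mu_max=1e6, n_iter=300, tol=1e-6):
    """Solve  min ||J||_* + alpha * ||E||_{2,1}  s.t.  X = X A + E,  A = J
    by inexact ALM; a single-feature stand-in for the patent's multi-feature model."""
    d, n = X.shape
    A = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    I = np.eye(n)
    for _ in range(n_iter):
        J = svt(A + Y2 / mu, 1.0 / mu)                      # low-rank coefficient block
        A = np.linalg.solve(I + XtX,
                            X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu)
        E = l21_shrink(X - X @ A + Y1 / mu, alpha / mu)     # column-sparse error block
        res1 = X - X @ A - E
        res2 = A - J
        Y1 += mu * res1
        Y2 += mu * res2
        mu = min(rho * mu, mu_max)
        if max(np.abs(res1).max(), np.abs(res2).max()) < tol:
            break
    return A, E  # A: low-rank representation, E: errors from noise and outliers
```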
the specific steps of the algorithm for extracting the characteristics of the adaptive LBP operator are as follows:
(1) Converting an image input into a system into a gray image, summing gray values of pixels of the image { gray v (i, j) }, and acquiring an average value:
(2) Removing the background by using the total texture characteristics, calculating the sum of absolute values of differences between the pixel gray value of the image and the average pixel gray value, and solving the average value:
removing the background by using local texture characteristics, traversing the image by using a sliding window with the size of 3 multiplied by 3, solving the difference between the gray value of a central pixel and the gray value of a peripheral pixel, and solving the average value in each window image:
(3) Fitting a method for calculating an adaptive threshold:
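The fitted threshold formula is given only as a figure in the original. As an illustration only, the following Python sketch computes the three statistics described in steps (1) to (3): the global mean gray value, the mean absolute deviation from it, and the mean local contrast inside 3 × 3 windows; the final line that combines them into a threshold is a placeholder assumption rather than the patent's fitted formula.

```python
import numpy as np

def adaptive_lbp_statistics(gray: np.ndarray):
    """gray: 2-D array of pixel gray values, e.g. produced by a standard
    luminance conversion of an RGB input image."""
    gray = gray.astype(np.float64)

    # (1) average gray value over the whole image
    mean_gray = gray.mean()

    # (2) global texture: mean absolute difference from the average gray value
    global_texture = np.abs(gray - mean_gray).mean()

    # (2') local texture: mean |center - neighbour| difference in each 3x3 window,
    #      averaged over all interior windows
    h, w = gray.shape
    local_diffs = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = gray[i - 1:i + 2, j - 1:j + 2]
            center = window[1, 1]
            neighbours = np.delete(window.ravel(), 4)   # the 8 surrounding pixels
            local_diffs.append(np.abs(center - neighbours).mean())
    local_texture = float(np.mean(local_diffs)) if local_diffs else 0.0

    # (3) placeholder for the fitted adaptive threshold (the patent's formula is not given here)
    threshold = 0.5 * (global_texture + local_texture)   # assumed combination
    return mean_gray, global_texture, local_texture, threshold
```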
The robot is internally provided with an image processing module, whose processing method comprises the following steps:
eliminating singular parts of the shapes; building mathematical models of the two shapes, constructing the feature matrix corresponding to each shape from the complete vector group that describes it, and computing the angle between adjacent sides; computing the closest distance between the two shapes; and applying enhanced processing to the results.
For the mathematical model, the side lengths and adjacent angles of the polygon are used to construct, in the counterclockwise direction, a vector S1 that represents the polygon:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S1 has a one-to-one mapping with the polygon, which shows that it is independent of the starting corner order.
The complete vector group consists of the 2N vectors S1, S2, …, S2N-1, S2N taken in the counterclockwise direction; they have a one-to-one mapping with the polygon and together form the complete vector group of the polygon, expressed as follows:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S2 = (α1, l2, α2, …, lN-1, αN-1, lN, αN, l1);
……
S2N-1 = (lN, αN, l1, α1, l2, α2, …, lN-1, αN-1);
S2N = (αN, l1, α1, l2, α2, …, lN-1, αN-1, lN);
The matrix S_E denotes the complete vector group and is defined as the feature matrix of the polygon; S_E is expressed as follows:
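The matrix S_E itself appears as a figure in the original. As an illustration of the complete vector group, the sketch below builds, from the side lengths l1…lN and adjacent angles α1…αN of a polygon, the 2N cyclic-shift vectors S1…S2N and stacks them into a 2N × 2N feature matrix; the function and variable names are illustrative, and the patent does not prescribe an implementation.

```python
import numpy as np

def complete_vector_group(lengths, angles):
    """lengths: [l1..lN], angles: [a1..aN], both taken counterclockwise.
    Returns the 2N x 2N feature matrix S_E whose rows S1..S2N are all cyclic
    shifts of the interleaved sequence (l1, a1, l2, a2, ..., lN, aN)."""
    interleaved = []
    for l, a in zip(lengths, angles):
        interleaved.extend([l, a])             # (l1, a1, l2, a2, ..., lN, aN)
    seq = np.array(interleaved, dtype=float)   # length 2N
    two_n = seq.size
    # row k is the sequence rotated left by k positions: S1, S2, ..., S2N
    return np.stack([np.roll(seq, -k) for k in range(two_n)])

# Example: a unit square (all sides 1, all interior angles 90 degrees)
S_E = complete_vector_group([1, 1, 1, 1], [90, 90, 90, 90])
```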
The preprocessing of the source shape and the target shape comprises:
setting an appropriate threshold according to the aspect ratio of the minimum bounding rectangle of the shape, and filtering;
setting a threshold according to the minimum ratio of side length to perimeter in the source shape, and removing singular parts of the target shape;
simplifying the number of edges of the target shape so that it equals the number of edges of the source shape.
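A minimal sketch of this preprocessing, under the assumption that each shape is given as an ordered list of 2-D vertices; the aspect-ratio limit and the "drop the shortest edge" simplification rule are illustrative choices, not values taken from the patent.

```python
import numpy as np

def edge_lengths(vertices):
    v = np.asarray(vertices, dtype=float)
    return np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1)

def aspect_ratio_ok(vertices, max_ratio=5.0):
    """Filter by the aspect ratio of the axis-aligned minimum bounding rectangle."""
    v = np.asarray(vertices, dtype=float)
    w, h = v.max(axis=0) - v.min(axis=0)
    long_side, short_side = max(w, h), max(min(w, h), 1e-9)
    return long_side / short_side <= max_ratio

def remove_singular_edges(vertices, ratio_threshold):
    """Drop vertices whose outgoing edge is shorter than ratio_threshold * perimeter."""
    v = list(map(tuple, vertices))
    lengths = edge_lengths(v)
    perimeter = lengths.sum()
    keep = [v[i] for i in range(len(v)) if lengths[i] / perimeter >= ratio_threshold]
    return keep if len(keep) >= 3 else v

def simplify_to_edge_count(vertices, target_edges):
    """Repeatedly drop the vertex starting the shortest edge until edge counts match."""
    v = list(map(tuple, vertices))
    while len(v) > target_edges:
        v.pop(int(np.argmin(edge_lengths(v))))
    return v

def preprocess(source, target):
    # threshold taken from the smallest side/perimeter ratio of the source shape
    src_lengths = edge_lengths(source)
    ratio_threshold = src_lengths.min() / src_lengths.sum()
    target = remove_singular_edges(target, ratio_threshold)
    target = simplify_to_edge_count(target, len(source))
    return target
```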
The Euclidean distance and the maximum similarity coefficient of the most similar vectors in the feature matrices of the source shape and the target shape are obtained as follows:
First, establish the feature matrices P_E and Q_E of the source shape P and the target shape Q, respectively, in the counterclockwise direction:
P_E = [P1ᵀ P2ᵀ … P2N-1ᵀ P2Nᵀ];
Q_E = [Q1ᵀ Q2ᵀ … Q2N-1ᵀ Q2Nᵀ];
The Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are as follows:
Based on d(x, y) and sim(x, y), two matrices D and S are defined such that:
find the extreme values of D and S;
let Eu_e = min{D_ij} and Sim_e = max{S_ij}, 1 ≤ i, j ≤ 2N;
then construct the feature matrices of shapes P and Q in the clockwise direction, repeat the above calculation, and obtain the corresponding values Eu_c and Sim_c between the most similar vectors of the two feature matrices;
finally let Eu = min{Eu_e, Eu_c};
Sim = min{Sim_e, Sim_c};
Eu and Sim are the Euclidean distance and the maximum similarity coefficient of the most similar vectors of the two shapes P and Q.
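The d(x, y) and sim(x, y) formulas are shown only as figures in the original. The sketch below assumes the standard Euclidean distance and cosine similarity, forms the pairwise matrices D and S between the rows of the two feature matrices, and returns Eu_e = min D and Sim_e = max S; repeating the procedure for feature matrices built in the opposite traversal direction and combining the two results is left to the caller.

```python
import numpy as np

def euclidean(x, y):
    return float(np.linalg.norm(x - y))

def cosine(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def best_match(P_E, Q_E):
    """P_E, Q_E: 2N x 2N feature matrices of the source and target shapes.
    Returns (Eu_e, Sim_e): the smallest pairwise Euclidean distance and the
    largest pairwise cosine similarity between rows of the two matrices."""
    D = np.array([[euclidean(p, q) for q in Q_E] for p in P_E])
    S = np.array([[cosine(p, q) for q in Q_E] for p in P_E])
    return float(D.min()), float(S.max())
```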
Further, the image correction method comprises:
collecting N samples as a training set X and computing the sample mean m by the following formula:
where xi ∈ X = (x1, x2, …, xN) is the sample training set;
computing the scatter matrix S:
computing the eigenvalues λi of the scatter matrix and the corresponding eigenvectors ei, where the ei are the principal components, and arranging the eigenvalues λ1, λ2, … in descending order;
taking the p largest values; λ1, λ2, …, λp determine the feature space E = (e1, e2, …, ep), onto which each element of the training sample X is projected as:
x'i = Eᵀxi, i = 1, 2, …, N;
this formula gives the p-dimensional vector obtained by PCA dimensionality reduction of the original vector.
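A compact sketch of this PCA-based correction: compute the sample mean, the scatter matrix and its leading eigenvectors, then project each sample onto the resulting p-dimensional subspace. The column-vector layout and the choice of p are assumptions made for illustration.

```python
import numpy as np

def pca_project(X, p):
    """X: d x N matrix whose columns x1..xN are the training samples.
    Returns (E, X_reduced): the d x p basis of principal components e1..ep
    and the p x N matrix of projected samples x'_i = E^T x_i."""
    m = X.mean(axis=1, keepdims=True)      # sample mean
    centered = X - m
    S = centered @ centered.T              # scatter matrix
    eigvals, eigvecs = np.linalg.eigh(S)   # ascending order for the symmetric S
    order = np.argsort(eigvals)[::-1]      # sort eigenvalues in descending order
    E = eigvecs[:, order[:p]]              # leading p eigenvectors (principal components)
    X_reduced = E.T @ X                    # x'_i = E^T x_i as written in the text
    return E, X_reduced
```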
Further, the recognition method of the image recognition module also includes home-item image recognition, which comprises the following steps:
detect the home items in the current frame and sort them by coordinates to obtain the recognition result for each home item in the current frame; from the current-frame results, collect the corresponding recognition results of each home item over n adjacent frames; count the identities obtained for each home item, and an identity that agrees in more than half (n/2) of the frames determines the final identity of the target;
wherein the reconstruction errors {r1, r2, …, rn} between the picture to be recognized and each category of the pre-stored home database are computed, with r1 < r2 < … < rn, and the resulting similarity value is compared with the ratio threshold T1 to determine the final recognition result, where T1 = 0.6.
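A small sketch of the temporal identity voting and the reconstruction-error test described above. The exact ratio formula is not reproduced in the text, so comparing r1/r2 with T1 below is an assumed reading of the rule, and the data structures are illustrative.

```python
from collections import Counter

def vote_identity(per_frame_ids):
    """per_frame_ids: identities recognised for one home item over n adjacent frames.
    The identity is accepted only if it wins more than half of the frames."""
    n = len(per_frame_ids)
    identity, count = Counter(per_frame_ids).most_common(1)[0]
    return identity if count > n / 2 else None

def ratio_test(reconstruction_errors, T1=0.6):
    """reconstruction_errors: {category: error} against the pre-stored home database.
    Accept the best category only if r1 / r2 <= T1 (assumed form of the ratio rule)."""
    ranked = sorted(reconstruction_errors.items(), key=lambda kv: kv[1])
    (best_cat, r1), (_, r2) = ranked[0], ranked[1]
    return best_cat if r1 / r2 <= T1 else None
```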
Further, the enhanced processing of the calculation results comprises:
deforming the initial vector one or more times: on the basis of the initial vector constructed from the adjacent side-and-angle sequence, geometric feature values of the shape are added, and the ratios of adjacent entries of the augmented sequence are taken as the new initial vector; the initial vector is also subjected to nonlinear processing one or more times, including square-root processing;
similarity is computed for each deformed initial vector, and the final value is taken as a weighted average; the evaluation formulas of the Euclidean distance Eu and the similarity coefficient Sim are:
where n is the number of vector deformations, k_i are the weight coefficients, Eu_i and Sim_i are the Euclidean distance and similarity coefficient of the vector after the i-th deformation, Eu(P, Q) is the Euclidean-distance evaluation, n = 4, and each k_i is taken as 0.25.
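The evaluation formulas for Eu(P, Q) and Sim(P, Q) appear only as figures. The sketch below takes the natural reading of the text, a weighted average over n = 4 deformations with k_i = 0.25, and uses simple placeholder deformations (appending a geometric feature value, adjacent-entry ratios, square roots) to stand in for the patent's transformations; it assumes the two initial vectors already have equal length after preprocessing.

```python
import numpy as np

def deformations(vector, perimeter):
    """Produce the deformed versions of an initial shape vector.
    The concrete deformations used here are placeholders illustrating the idea."""
    v = np.asarray(vector, dtype=float)
    return [
        v,                          # original initial vector
        np.append(v, perimeter),    # augmented with a geometric feature value
        v[1:] / v[:-1],             # ratios of adjacent entries
        np.sqrt(np.abs(v)),         # square-root ("evolution") processing
    ]

def weighted_similarity(p_vec, q_vec, p_perimeter, q_perimeter,
                        weights=(0.25, 0.25, 0.25, 0.25)):
    """Eu(P, Q) and Sim(P, Q) as weighted averages over the n = 4 deformations."""
    eus, sims = [], []
    for dp, dq in zip(deformations(p_vec, p_perimeter),
                      deformations(q_vec, q_perimeter)):
        eus.append(float(np.linalg.norm(dp - dq)))
        sims.append(float(np.dot(dp, dq) /
                          (np.linalg.norm(dp) * np.linalg.norm(dq) + 1e-12)))
    eu = sum(k * e for k, e in zip(weights, eus))
    sim = sum(k * s for k, s in zip(weights, sims))
    return eu, sim
```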
Further, when the image editing module is wirelessly connected to the robot, the transfer function is:
where ω0 is the center frequency of the filter; for different values of ω0, the ratio k/ω0 remains unchanged.
The invention has the following advantages and positive effects: the smart home robot system based on instant messaging uses three-dimensional modeling technology to generate position images of all furniture and home appliances, marks the device information of each item, and transmits this information to the robot. A user can remotely issue a control instruction through the client; the instruction is relayed to the robot via the server, and the robot operates according to the instruction. Based on a convenient instant-messaging channel, the user can control the smart home in an anthropomorphic, point-to-point or group-communication manner, improving control capability and the interactive experience.
In the image acquisition and processing method, the home-item position region is obtained by combining an improved LRR model with morphological operations, based on the image color and the LBP feature operator. This effectively improves the robustness and accuracy of home-item position detection and reduces false detections. The feature-vector extraction method for home images improves recognition to a certain extent and benefits image acquisition and recognition.
The contour similarity detection method of the invention improves the robot's visual resolution of shape similarity and is especially helpful for highly similar shapes that are otherwise difficult to distinguish; the detection is more stable and reliable; detection time is short, computation is efficient, and implementation cost is low. The invention only queries the edges of the shape, reducing the data-processing load. The invention constructs the feature matrix of the shape, selects appropriate judgment criteria, applies repeated enhanced nonlinear transformations to the elements of the feature matrix, and establishes the similarity standard using a majority value and a weighted average over multiple criteria, achieving high efficiency and strong stability.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent home robot system based on instant messaging according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an interaction mode between a client and a smart home provided by an embodiment of the present invention.
In the figure: 1. three-dimensional scanning module; 2. image recognition module; 3. image editing module; 4. robot; 5. server; 6. client.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The structure of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, the smart home robot system based on instant messaging according to an embodiment of the present invention is provided with a three-dimensional scanning module 1, an image recognition module 2, an image editing module 3, a robot 4, a server 5 and a client 6; the three-dimensional scanning module 1 is electrically connected to the image recognition module 2, the image recognition module 2 is electrically connected to the image editing module 3, the image editing module 3 is connected to the robot 4 by wireless signal, the robot 4 is controlled by signals from the server 5, and the clients 6 (there may be several) can send instructions to the server 5.
As a preferred embodiment of the present invention, the image recognition module 2 is provided with a digital-to-analog converter.
As a preferred embodiment of the present invention, the three-dimensional scanning module 1 generates an image at a 1:1 scale.
As a preferred embodiment of the present invention, the image editing module 3 is provided with three-dimensional modeling software and a wireless transmitting device.
As a preferred embodiment of the invention, different clients 6 have different function permissions, which prevents accidents caused by children's erroneous operations.
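As an illustration of per-client function permissions (for example, restricting a child's account), the sketch below checks an instruction against a permission table before it is forwarded; the permission names and the table structure are assumptions, not part of the patent.

```python
# Hypothetical permission table: which instruction types each client account may send.
PERMISSIONS = {
    "parent_phone": {"query_status", "move_robot", "switch_appliance", "set_schedule"},
    "child_tablet": {"query_status"},   # restricted, read-only account for a child
}

def authorize(client_id: str, instruction: str) -> bool:
    """Return True if this client is allowed to send this instruction to the server."""
    return instruction in PERMISSIONS.get(client_id, set())

# Example: a control command from the restricted account is rejected.
assert authorize("parent_phone", "switch_appliance")
assert not authorize("child_tablet", "switch_appliance")
```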
The recognition method of the image recognition module comprises: extracting color features and adaptive LBP operator features, and constructing a multi-feature low-rank matrix representation model:
where α is a coefficient greater than 0 and the error term is used to measure errors caused by noise and outliers;
this is equivalent to the following model:
the image is then corrected and the located image is output.
The specific steps of the algorithm for extracting adaptive LBP operator features are as follows:
(1) Convert the input image to a grayscale image, sum the pixel gray values {grayv(i, j)} of the image, and take the average:
(2) Remove the background using the global texture feature: compute the sum of the absolute differences between each pixel gray value and the average gray value, and take the average:
Remove the background using local texture features: traverse the image with a 3 × 3 sliding window, compute the difference between the gray value of the central pixel and the gray values of the surrounding pixels, and take the average within each window:
(3) Fit an adaptive threshold from these statistics:
The robot is internally provided with an image processing module, whose processing method comprises the following steps:
eliminating singular parts of the shapes; building mathematical models of the two shapes, constructing the feature matrix corresponding to each shape from the complete vector group that describes it, and computing the angle between adjacent sides; computing the closest distance between the two shapes; and applying enhanced processing to the results.
For the mathematical model, the side lengths and adjacent angles of the polygon are used to construct, in the counterclockwise direction, a vector S1 that represents the polygon:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S1 has a one-to-one mapping with the polygon, which shows that it is independent of the starting corner order.
The complete vector group consists of the 2N vectors S1, S2, …, S2N-1, S2N taken in the counterclockwise direction; they have a one-to-one mapping with the polygon and together form the complete vector group of the polygon, expressed as follows:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S2 = (α1, l2, α2, …, lN-1, αN-1, lN, αN, l1);
……
S2N-1 = (lN, αN, l1, α1, l2, α2, …, lN-1, αN-1);
S2N = (αN, l1, α1, l2, α2, …, lN-1, αN-1, lN);
The matrix S_E denotes the complete vector group and is defined as the feature matrix of the polygon; S_E is expressed as follows:
The preprocessing of the source shape and the target shape comprises:
setting an appropriate threshold according to the aspect ratio of the minimum bounding rectangle of the shape, and filtering;
setting a threshold according to the minimum ratio of side length to perimeter in the source shape, and removing singular parts of the target shape;
simplifying the number of edges of the target shape so that it equals the number of edges of the source shape.
The Euclidean distance and the maximum similarity coefficient of the most similar vectors in the feature matrices of the source shape and the target shape are obtained as follows:
First, establish the feature matrices P_E and Q_E of the source shape P and the target shape Q, respectively, in the counterclockwise direction:
P_E = [P1ᵀ P2ᵀ … P2N-1ᵀ P2Nᵀ];
Q_E = [Q1ᵀ Q2ᵀ … Q2N-1ᵀ Q2Nᵀ];
The Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are as follows:
Based on d(x, y) and sim(x, y), two matrices D and S are defined such that:
find the extreme values of D and S;
let Eu_e = min{D_ij} and Sim_e = max{S_ij}, 1 ≤ i, j ≤ 2N;
then construct the feature matrices of shapes P and Q in the clockwise direction, repeat the above calculation, and obtain the corresponding values Eu_c and Sim_c between the most similar vectors of the two feature matrices;
finally let Eu = min{Eu_e, Eu_c};
Sim = min{Sim_e, Sim_c};
Eu and Sim are the Euclidean distance and the maximum similarity coefficient of the most similar vectors of the two shapes P and Q.
The image correction method comprises the following steps:
collecting N samples as a training set X and computing the sample mean m by the following formula:
where xi ∈ X = (x1, x2, …, xN) is the sample training set;
computing the scatter matrix S:
computing the eigenvalues λi of the scatter matrix and the corresponding eigenvectors ei, where the ei are the principal components, and arranging the eigenvalues λ1, λ2, … in descending order;
taking the p largest values; λ1, λ2, …, λp determine the feature space E = (e1, e2, …, ep), onto which each element of the training sample X is projected as:
x'i = Eᵀxi, i = 1, 2, …, N;
this formula gives the p-dimensional vector obtained by PCA dimensionality reduction of the original vector.
Further, the recognition method of the image recognition module also includes home-item image recognition, which comprises the following steps:
detect the home items in the current frame and sort them by coordinates to obtain the recognition result for each home item in the current frame; from the current-frame results, collect the corresponding recognition results of each home item over n adjacent frames; count the identities obtained for each home item, and an identity that agrees in more than half (n/2) of the frames determines the final identity of the target;
wherein the reconstruction errors {r1, r2, …, rn} between the picture to be recognized and each category of the pre-stored home database are computed, with r1 < r2 < … < rn, and the resulting similarity value is compared with the ratio threshold T1 to determine the final recognition result, where T1 = 0.6.
The enhanced processing of the calculation results comprises:
deforming the initial vector one or more times: on the basis of the initial vector constructed from the adjacent side-and-angle sequence, geometric feature values of the shape are added, and the ratios of adjacent entries of the augmented sequence are taken as the new initial vector; the initial vector is also subjected to nonlinear processing one or more times, including square-root processing;
similarity is computed for each deformed initial vector, and the final value is taken as a weighted average; the evaluation formulas of the Euclidean distance Eu and the similarity coefficient Sim are:
where n is the number of vector deformations, k_i are the weight coefficients, Eu_i and Sim_i are the Euclidean distance and similarity coefficient of the vector after the i-th deformation, Eu(P, Q) is the Euclidean-distance evaluation, n = 4, and each k_i is taken as 0.25.
The transfer function when the image editing module is wirelessly connected to the robot is as follows:
where ω0 is the center frequency of the filter; for different values of ω0, the ratio k/ω0 remains unchanged.
Using three-dimensional modeling technology, the position image of the furniture and home appliances in the whole house is generated by the three-dimensional scanning module 1 and transmitted to the image recognition module 2 as a digital signal; the image recognition module 2 converts the digital signal into an analog signal and transmits the image to the image editing module 3; by marking the device information of the furniture and appliances in the image editing module 3, the position information and the device information are unified; finally, the unified information is transmitted to the robot 4. A user can remotely issue a control instruction through the client 6; the instruction is relayed to the robot 4 through the server 5, and the robot 4 operates according to the instruction.
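The information flow just described can be summarised in a short sketch: the annotated 1:1 map produced from the scan is loaded into the robot, and client instructions reach the robot through the server over the instant-messaging channel. All class, field and instruction names below are illustrative; the patent does not define a software interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class AnnotatedMap:
    """1:1-scale position image of the home, tagged with device information."""
    positions: Dict[str, Tuple[float, float, float]]   # device name -> position
    device_info: Dict[str, str]                        # device name -> device information

@dataclass
class Robot:
    home_map: AnnotatedMap = None
    log: List[str] = field(default_factory=list)

    def load_map(self, annotated: AnnotatedMap) -> None:
        # unified combination of position information and device information
        self.home_map = annotated

    def execute(self, instruction: str) -> None:
        self.log.append(f"executing: {instruction}")

@dataclass
class Server:
    robot: Robot

    def relay(self, client_id: str, instruction: str) -> None:
        """Instant-messaging style relay: pass a client's instruction on to the robot."""
        self.robot.execute(f"{client_id}: {instruction}")

# Example flow (all names hypothetical)
robot = Robot()
robot.load_map(AnnotatedMap(positions={"lamp": (1.0, 2.0, 0.5)},
                            device_info={"lamp": "dimmable ceiling light"}))
server = Server(robot)
server.relay("parent_phone", "turn on the lamp")
```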
The contour similarity detection method of the invention improves the robot's visual resolution of shape similarity and is especially helpful for highly similar shapes that are otherwise difficult to distinguish; the detection is more stable and reliable; detection time is short, computation is efficient, and implementation cost is low. The invention only queries the edges of the shape, reducing the data-processing load. The invention constructs the feature matrix of the shape, selects appropriate judgment criteria, applies repeated enhanced nonlinear transformations to the elements of the feature matrix, and establishes the similarity standard using a majority value and a weighted average over multiple criteria, achieving high efficiency and strong stability.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (5)

1. An intelligent home robot system based on instant messaging is characterized in that the intelligent home robot system based on instant messaging is provided with a three-dimensional scanning module, an image recognition module, an image editing module, a robot, a server and a client; the three-dimensional scanning module is electrically connected with the image recognition module, the image recognition module is electrically connected with the image editing module, the image editing module is in wireless signal connection with the robot, the robot receives signal control of the server, and the client can send an instruction to the server;
the image recognition module is provided with a digital-to-analog converter;
the image generated by the three-dimensional scanning module is at a 1:1 scale;
the image editing module is provided with three-dimensional modeling software and a wireless transmitting device;
different clients have different function permissions, so that accidents caused by children's erroneous operations are prevented;
the recognition method of the image recognition module comprises: extracting color features and adaptive LBP operator features; constructing a multi-feature low-rank matrix representation model:
s.t. Xi = XiAi + Ei, i = 1, …, K
where α is a coefficient greater than 0 and the error term is used to measure errors caused by noise and outliers;
which is equivalent to the following model:
then correcting the image and outputting the located image;
the specific steps of the algorithm for extracting adaptive LBP operator features are as follows:
(1) converting the input image into a grayscale image, summing the pixel gray values {grayv(i, j)} of the image, and taking the average:
(2) removing the background using the global texture feature: computing the sum of the absolute differences between each pixel gray value and the average gray value, and taking the average:
removing the background using local texture features: traversing the image with a 3 × 3 sliding window, computing the difference between the gray value of the central pixel and the gray values of the surrounding pixels, and taking the average within each window:
(3) fitting an adaptive threshold from these statistics:
the robot is internally provided with an image processing module, and the processing method of the image processing module comprises:
eliminating singular parts of the shapes; building mathematical models of the two shapes, constructing the feature matrix corresponding to each shape from the complete vector group that describes it, and computing the angle between adjacent sides; computing the closest distance between the two shapes; and applying enhanced processing to the results;
for the mathematical model, the side lengths and adjacent angles of the polygon are used to construct, in the counterclockwise direction, a vector S1 that represents the polygon:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S1 has a one-to-one mapping with the polygon, which shows that it is independent of the starting corner order;
the complete vector group consists of the 2N vectors S1, S2, …, S2N-1, S2N taken in the counterclockwise direction; they have a one-to-one mapping with the polygon and together form the complete vector group of the polygon, expressed as follows:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
the matrix S_E denotes the complete vector group and is defined as the feature matrix of the polygon; S_E is expressed as follows:
the preprocessing of the source shape and the target shape comprises:
setting an appropriate threshold according to the aspect ratio of the minimum bounding rectangle of the shape, and filtering;
setting a threshold according to the minimum ratio of side length to perimeter in the source shape, and removing singular parts of the target shape;
simplifying the number of edges of the target shape so that it equals the number of edges of the source shape;
the Euclidean distance and the maximum similarity coefficient of the most similar vectors in the feature matrices of the source shape and the target shape are obtained as follows:
first, establish the feature matrices P_E and Q_E of the source shape P and the target shape Q, respectively, in the counterclockwise direction:
P_E = [P1ᵀ P2ᵀ … P2N-1ᵀ P2Nᵀ];
Q_E = [Q1ᵀ Q2ᵀ … Q2N-1ᵀ Q2Nᵀ];
the Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are as follows:
based on d(x, y) and sim(x, y), two matrices D and S are defined such that:
find the extreme values of D and S;
let Eu_e = min{D_ij} and Sim_e = max{S_ij}, 1 ≤ i, j ≤ 2N;
then construct the feature matrices of shapes P and Q in the clockwise direction, repeat the above calculation, and obtain the corresponding values Eu_c and Sim_c between the most similar vectors of the two feature matrices;
finally let Eu = min{Eu_e, Eu_c};
Sim = min{Sim_e, Sim_c};
Eu and Sim are the Euclidean distance and the maximum similarity coefficient of the most similar vectors of the two shapes P and Q.
2. The intelligent home robot system based on instant messaging according to claim 1, wherein the image correction method comprises:
collecting N samples as a training set X and computing the sample mean m by the following formula:
where xi ∈ X = (x1, x2, …, xN) is the sample training set;
computing the scatter matrix S:
computing the eigenvalues λi of the scatter matrix and the corresponding eigenvectors ei, where the ei are the principal components, and arranging the eigenvalues λ1, λ2, … in descending order;
taking the p largest values; λ1, λ2, …, λp determine the feature space E = (e1, e2, …, ep), onto which each element of the training sample X is projected as:
x'i = Eᵀxi, i = 1, 2, …, N;
the p-dimensional vector obtained by PCA dimensionality reduction of the original vector is given by this formula.
3. The intelligent home robot system based on instant messaging according to claim 1, wherein the recognition method of the image recognition module further comprises home-item image recognition, comprising:
detecting the home items in the current frame and sorting them by coordinates to obtain the recognition result for each home item in the current frame; from the current-frame results, collecting the corresponding recognition results of each home item over n adjacent frames; counting the identities obtained for each home item, an identity that agrees in more than half (n/2) of the frames determining the final identity of the target;
wherein the reconstruction errors {r1, r2, …, rn} between the picture to be recognized and each category of the pre-stored home database are computed, with r1 < r2 < … < rn, and the resulting similarity value is compared with the ratio threshold T1 to determine the final recognition result, where T1 = 0.6.
4. The intelligent home robot system based on instant messaging according to claim 1, wherein the enhanced processing of the calculation results comprises:
deforming the initial vector one or more times: on the basis of the initial vector constructed from the adjacent side-and-angle sequence, geometric feature values of the shape are added, and the ratios of adjacent entries of the augmented sequence are taken as the new initial vector; the initial vector is also subjected to nonlinear processing one or more times, including square-root processing;
computing the similarity for each deformed initial vector and taking the final value as a weighted average, the evaluation formulas of the Euclidean distance Eu and the similarity coefficient Sim being:
where n is the number of vector deformations, k_i are the weight coefficients, Eu_i and Sim_i are the Euclidean distance and similarity coefficient of the vector after the i-th deformation, Eu(P, Q) is the Euclidean-distance evaluation, n = 4, and each k_i is taken as 0.25.
5. The intelligent home robot system based on instant messaging according to claim 1, wherein the transfer function when the image editing module is wirelessly connected with the robot is as follows:
where ω0 is the center frequency of the filter; for different values of ω0, the ratio k/ω0 remains unchanged.
CN201711193469.1A 2017-11-24 2017-11-24 A kind of smart home robot system based on instant messaging Pending CN107977615A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711193469.1A CN107977615A (en) 2017-11-24 2017-11-24 A kind of smart home robot system based on instant messaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711193469.1A CN107977615A (en) 2017-11-24 2017-11-24 A kind of smart home robot system based on instant messaging

Publications (1)

Publication Number Publication Date
CN107977615A true CN107977615A (en) 2018-05-01

Family

ID=62011610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711193469.1A Pending CN107977615A (en) 2017-11-24 2017-11-24 A kind of smart home robot system based on instant messaging

Country Status (1)

Country Link
CN (1) CN107977615A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810474A (en) * 2014-02-14 2014-05-21 西安电子科技大学 Car plate detection method based on multiple feature and low rank matrix representation
CN105141503A (en) * 2015-08-13 2015-12-09 北京北信源软件股份有限公司 Novel instant messaging intelligent robot
CN105007207A (en) * 2015-08-14 2015-10-28 北京北信源软件股份有限公司 Intelligent household robot system based on real-time communication
CN105354866A (en) * 2015-10-21 2016-02-24 郑州航空工业管理学院 Polygon contour similarity detection method
CN105469484A (en) * 2015-11-20 2016-04-06 宁波大业产品造型艺术设计有限公司 App intelligent home lock
CN106003035A (en) * 2016-06-17 2016-10-12 小船信息科技(上海)有限公司 Smart home robot system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108787836A (en) * 2018-06-14 2018-11-13 淮阴师范学院 A kind of sheet metal component punch forming mechanism of smart home robot and control method
CN108822186A (en) * 2018-07-06 2018-11-16 广东石油化工学院 A kind of molecular biology test extraction mortar and its extracting method
CN109448040A (en) * 2018-10-22 2019-03-08 湖南机电职业技术学院 A kind of machinery production manufacture displaying auxiliary system
CN109509252A (en) * 2018-11-12 2019-03-22 湖南城市学院 A kind of new indoor finishing Intelligentized design method
CN114393583A (en) * 2022-01-28 2022-04-26 北京云迹科技股份有限公司 Method and device for controlling equipment through robot
CN114393583B (en) * 2022-01-28 2024-02-20 北京云迹科技股份有限公司 Method and device for controlling equipment through robot

Similar Documents

Publication Publication Date Title
CN107977615A (en) A kind of smart home robot system based on instant messaging
CN108763606B (en) Method and system for automatically extracting house type graphic primitive based on machine vision
CN107292907B (en) Method for positioning following target and following equipment
CN107358629B (en) Indoor mapping and positioning method based on target identification
CN103252778B Apparatus and method for estimating robot location
CN103049892B (en) Non-local image denoising method based on similar block matrix rank minimization
CN107688856B (en) Indoor robot scene active identification method based on deep reinforcement learning
Suganthan et al. Pattern recognition by homomorphic graph matching using Hopfield neural networks
CN113361542B (en) Local feature extraction method based on deep learning
CN108898063A (en) A kind of human body attitude identification device and method based on full convolutional neural networks
CN109285110A (en) The infrared visible light image registration method and system with transformation are matched based on robust
CN106127125A (en) Distributed DTW human body behavior intension recognizing method based on human body behavior characteristics
CN114693661A (en) Rapid sorting method based on deep learning
CN112862757A (en) Weight evaluation system based on computer vision technology and implementation method
CN115880415A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN107860390A Remote fixed-point autonomous navigation method for a nonholonomic mobile robot based on a vision-based ROS system
Favre et al. A plane-based approach for indoor point clouds registration
CN112069979B (en) Real-time action recognition man-machine interaction system
CN116703895B Small-sample 3D visual detection method and system based on a generative adversarial network
CN108986181A (en) Image processing method, device and computer readable storage medium based on dot
JP6773825B2 (en) Learning device, learning method, learning program, and object recognition device
Park et al. Depth image correction for intel realsense depth camera
JP3110167B2 (en) Object Recognition Method Using Hierarchical Neural Network
TWI531985B (en) Palm biometric method
JP2022531029A (en) Image recognition method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180501)