CN115607978A - Control method and device of intelligent interaction equipment and intelligent interaction panel - Google Patents


Info

Publication number
CN115607978A
CN115607978A (application CN202110809229.XA)
Authority
CN
China
Prior art keywords
interaction
image
information
interactive
interactive object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110809229.XA
Other languages
Chinese (zh)
Inventor
林建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202110809229.XA priority Critical patent/CN115607978A/en
Publication of CN115607978A publication Critical patent/CN115607978A/en
Pending legal-status Critical Current

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63H — TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00 — Other toys
    • A63H33/04 — Building blocks, strips, or similar building parts
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F9/00 — Games not otherwise provided for
    • A63F9/24 — Electric games; Games using electronic circuits not otherwise provided for
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F9/00 — Games not otherwise provided for
    • A63F9/24 — Electric games; Games using electronic circuits not otherwise provided for
    • A63F2009/2401 — Detail of input, input devices
    • A63F2009/2402 — Input by manual operation
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F9/00 — Games not otherwise provided for
    • A63F9/24 — Electric games; Games using electronic circuits not otherwise provided for
    • A63F2009/2401 — Detail of input, input devices
    • A63F2009/243 — Detail of input, input devices with other kinds of input
    • A63F2009/2432 — Detail of input, input devices with other kinds of input actuated by a sound, e.g. using a microphone

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a control method and apparatus for an intelligent interaction device, and an intelligent interaction tablet. The method comprises: displaying interaction prompt information, which instructs an interaction subject to complete a preset operation on an interaction object; receiving a completion instruction and acquiring a first image of the interaction object; and extracting, from the first image, operation result information about the interaction subject's execution of the preset operation, and determining from it an evaluation parameter that characterizes how accurately the interaction subject performed the preset operation. The invention thereby solves the technical problem in the prior art that building blocks, lacking interaction, offer only a single way of playing.

Description

Control method and device of intelligent interaction equipment and intelligent interaction panel
Technical Field
The invention relates to the field of Internet technology, and in particular to a control method and apparatus for an intelligent interaction device and to an intelligent interaction tablet.
Background
Building-block toys are an important component of early childhood education. They can effectively help children develop intelligence, train hand-eye coordination, and stimulate imagination; combining various types of blocks to build a real object together is particularly conducive to cultivating imagination and creativity.
However, existing building-block toys lack interaction: children must build by exploring on their own or with adult assistance, which makes it difficult to effectively cultivate their cognition of shape, color, and size, or related thinking abilities such as hand-eye coordination and attention.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a control method and apparatus for an intelligent interaction device and an intelligent interaction tablet, which at least solve the technical problem in the prior art that building blocks, lacking interaction, offer only a single way of playing.
According to an aspect of an embodiment of the present invention, a method for controlling an intelligent interaction device is provided, including: displaying interaction prompt information, wherein the interaction prompt information is used for instructing an interaction subject to complete a preset operation on an interaction object; receiving a completion instruction, and acquiring a first image of the interaction object; and extracting, from the first image of the interaction object, operation result information of the interaction subject executing the preset operation, and determining, based on the operation result information, an evaluation parameter for the interaction subject's execution of the preset operation, wherein the evaluation parameter represents the accuracy with which the interaction subject executed the preset operation.
According to another aspect of the embodiments of the present invention, there is also provided a control apparatus for an intelligent interaction device, including: a display module for displaying interaction prompt information, wherein the interaction prompt information is used for instructing the interaction subject to complete a preset operation on the interaction object; a receiving module for receiving a completion instruction and acquiring a first image of the interaction object; and a determining module for extracting, from the first image of the interaction object, operation result information of the interaction subject executing the preset operation, and determining, based on the operation result information, an evaluation parameter for the interaction subject's execution of the preset operation, wherein the evaluation parameter represents the accuracy with which the interaction subject executed the preset operation.
According to another aspect of embodiments of the present invention, there is also provided a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided an intelligent interactive tablet, including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform any of the method steps described above.
In the embodiment of the invention, the control method of the intelligent interaction device includes: displaying interaction prompt information, wherein the interaction prompt information is used for instructing an interaction subject to complete a preset operation on an interaction object; receiving a completion instruction, and acquiring a first image of the interaction object; and extracting, from the first image, operation result information of the interaction subject executing the preset operation, and determining, based on the operation result information, an evaluation parameter representing the accuracy with which the interaction subject executed the preset operation. In this scheme, the intelligent interaction device displays the interaction prompt information to the interaction subject via voice, images, animations, pictures, and the like; when the interaction subject completes the preset operation according to the prompt, the device acquires in real time an image of the interaction object reflecting the operation result and evaluates in real time, via the evaluation parameter, the accuracy of the interaction subject's execution, thereby solving the technical problem in the prior art that building blocks, lacking interaction, offer only a single way of playing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a control method of an intelligent interactive device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first alternative building block interaction according to an embodiment of the invention;
FIG. 3a is a schematic diagram of a second alternative building block interaction according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a third alternative building block interaction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a fourth alternative building block interaction in accordance with embodiments of the present invention;
FIG. 5 is a schematic diagram of a fifth alternative building block interaction according to embodiments of the invention;
FIG. 6 is a schematic diagram of a second image of an interactive object according to an embodiment of the present invention;
FIG. 7a is a schematic illustration of a first portion of the interactive objects obtained in accordance with FIG. 6;
FIG. 7b is a schematic illustration of a second portion of the interactive objects obtained in accordance with FIG. 6;
FIG. 7c is a schematic illustration of a third portion of the interactive objects obtained in accordance with FIG. 6;
FIG. 7d is a schematic diagram of a fourth part of the interactive object obtained according to FIG. 6;
FIG. 8a is a schematic view of the interactive object of FIG. 7c after dilation and erosion;
FIG. 8b is a schematic outline view of an interactive object according to FIG. 8 a;
fig. 9 is a schematic diagram of the image-capture range of the camera of an intelligent interaction device according to an embodiment of the invention;
FIG. 10 is a flowchart of another control method of an intelligent interaction device, according to an embodiment of the invention;
FIG. 11 is a schematic diagram of a control apparatus of an intelligent interactive device according to an embodiment of the present invention;
fig. 12 is a schematic diagram of an intelligent interactive tablet according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for controlling an intelligent interactive device, it is noted that the steps illustrated in the flowchart of the attached drawings may be implemented in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Fig. 1 is a flowchart of a control method of an intelligent interactive device according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
and S102, displaying interaction prompt information, wherein the interaction prompt information is used for indicating an interaction subject to complete preset operation on an interaction object.
The intelligent interaction device can be an intelligent terminal device equipped with a camera, such as a tablet computer, a mobile phone, or an intelligent interaction tablet. The interactive objects can be smart toys such as building blocks and jigsaw puzzles. The interaction prompt information can be displayed to the interaction subject on the screen of the intelligent interaction device or played by voice broadcast, and its content may comprise text, pictures, animations, or a fusion of several of these. The interaction subject may be a user operating an interaction object, for example a young child.
In an optional implementation, in a classroom or home scenario, the intelligent interaction device is a tablet computer, the interaction object may be a set of building blocks, and the interaction prompt information may prompt the child to perform certain operations on the set, for example: "Please find all the triangular building blocks"; or: "Please find the building blocks of color X"; or: "Please build the blocks into the shape shown in the figure", and so on.
In a further optional implementation, the interaction prompt information may be sequential; that is, after the user performs an operation according to the current prompt, the next prompt is given based on the user's operation result. For example, after the user finds all the triangular building blocks according to the current prompt, the next prompt is generated from that result: among the triangles found, find the red ones.
It should be noted that the interaction prompt information is displayed in an interaction interface of the intelligent interaction device, and the interaction interface may be provided by a specific application installed on the intelligent interaction device, or may be provided by a specific web page accessed by the intelligent interaction device.
And step S104, receiving a finishing instruction, and acquiring a first image of the interactive object.
The completion instruction can be issued by the interaction subject operating the intelligent interaction device; for example, a control such as a "game completed" button is provided on the screen, and the interaction subject can end the game by touching it, sending the completion instruction to the device. The completion instruction may also be obtained by recognizing a voice instruction; for example, the user may say "I have completed the game" to the device, which recognizes the speech, receives the completion instruction, and collects the first image of the interaction object. The completion instruction may also be determined by a time preset on the device; for example, if an operation time of 10 minutes is preset, then when that time expires the device is deemed to have received the completion instruction and collects the first image of the interaction object.
And step S106, extracting operation result information of the interaction subject for executing the preset operation from the first image of the interaction object, and determining an evaluation parameter for executing the preset operation on the interaction subject based on the operation result information.
The evaluation parameters are used for representing the accuracy degree of the preset operation executed by the interaction main body.
The operation result information of the preset operation refers to information extracted from the first image that can represent the operation result; it matches the interaction prompt information. For example, if the prompt is "please find all the triangular building blocks", the extracted operation result information is the portion of the first image from which all the triangular building blocks can be identified.
The evaluation parameter may be expressed as one of several grades, or as a score, indicating how accurately the interaction subject performed the preset operation; a higher grade or score indicates higher accuracy.
It should be noted that standard operation result information corresponding to the interaction prompt information is preset in the intelligent interaction device. It represents the result of the interaction subject correctly completing the preset operation on the interaction object according to the prompt, and is compared with the operation result information extracted from the first image of the interaction object to determine the evaluation parameter.
In an alternative embodiment, the interactive object is a set of building blocks, the operation result information may record whether all triangular blocks of the same size and black color were placed at a specified position, and the evaluation parameter is expressed as a score. Specifically, the evaluation parameter for the interaction subject's execution of the preset operation can be determined by comparing the operation result information with the standard operation result information. A score of 100 indicates that the interaction subject correctly completed the preset operation on the interaction object according to the prompt; a score between 0 and 100 indicates that the preset operation was only partially completed; and a score of 0 indicates that no part of the preset operation was completed.
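As a rough illustration of how such a score might be computed, the sketch below compares an extracted result against the preset standard result and returns a score from 0 to 100. All names here (evaluation_score, blocks_found, blocks_expected) are illustrative assumptions, not identifiers from the patent.

```python
def evaluation_score(blocks_found, blocks_expected):
    """Fraction of expected blocks present in the extracted result, as 0-100."""
    if not blocks_expected:
        return 100.0
    correct = sum(1 for block in blocks_expected if block in blocks_found)
    return 100.0 * correct / len(blocks_expected)

# each block described by (shape, color, size)
expected = [("triangle", "black", "small"), ("triangle", "black", "large")]
full_score = evaluation_score(expected, expected)      # all blocks placed
half_score = evaluation_score(expected[:1], expected)  # one of two placed
zero_score = evaluation_score([], expected)            # nothing placed
```

Under this sketch, 100 corresponds to a fully correct operation, a value between 0 and 100 to a partially completed one, and 0 to no completed operation, matching the grading described above.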
As can be seen from the above, in the above embodiments of the present application, the control method of the intelligent interaction device includes: displaying interaction prompt information, wherein the interaction prompt information is used for instructing an interaction subject to complete a preset operation on an interaction object; receiving a completion instruction, and acquiring a first image of the interaction object; and extracting, from the first image, operation result information of the interaction subject executing the preset operation, and determining, based on the operation result information, an evaluation parameter representing the accuracy with which the interaction subject executed the preset operation. In this scheme, the intelligent interaction device displays the interaction prompt information to the interaction subject via voice, images, animations, pictures, and the like; when the interaction subject completes the preset operation according to the prompt, the device acquires in real time the first image containing the result of the interaction subject's operation on the interaction object, and determines from that result an evaluation parameter expressing the operation's accuracy. This enriches the modes of building-block interaction, can increase the user's interest in the blocks, and solves the technical problem in the prior art that building blocks, lacking interaction, offer only a single way of playing.
As an optional embodiment, the interactive object is a building block, and the interactive prompt information includes at least one of the following types: interactive prompt information for distinguishing the shapes of the building blocks; interactive prompt information for distinguishing the sizes of the building blocks; and interactive prompt information for distinguishing the colors of the building blocks.
Fig. 2 is a schematic diagram of a first alternative building block interaction according to an embodiment of the present invention, and in an alternative implementation, the shape recognition ability of the child may be developed through an interaction hint for distinguishing the shape of the building block. As shown in fig. 2, all rectangular blocks in the blocks can be found through the interaction prompt message displayed on the intelligent interaction device, so as to guide the child to accurately find the blocks with specific shapes from the blocks with different shapes.
Fig. 3a is a schematic diagram of a second alternative building block interaction according to an embodiment of the present invention. In an alternative implementation, the size awareness of the child may be trained through an interaction prompt for distinguishing block sizes. Referring to fig. 3a, the prompt "please find the building block shown on the screen" displayed on the intelligent interaction device guides the child to accurately pick out, among blocks of different sizes, the block whose size matches the one shown on the screen.
Fig. 3b is a schematic diagram of a third alternative building block interaction according to an embodiment of the present invention. In another alternative implementation, as shown in fig. 3b, the prompt displayed on the intelligent interaction device, "please find the small triangle among the blocks and place it on the left, and the large triangle and place it on the right", may guide the child to distinguish sizes.
Fig. 4 is a schematic diagram of a fourth alternative implementation of block interaction according to the embodiment of the present invention, and in an alternative implementation, the color cognitive ability of a child may be developed through interaction hints for distinguishing block colors. As shown in fig. 4, by using the interaction prompt information displayed on the intelligent interaction device, "please find all black blocks in the blocks", the children are guided to accurately find all blocks with the same color in the blocks with different colors.
Fig. 5 is a schematic diagram of a fifth alternative building block interaction according to an embodiment of the present invention. In an alternative implementation, as shown in fig. 5, the prompt displayed on the intelligent interaction device, "please assemble the block figure shown on the screen", guides the child to accurately find the corresponding blocks among different blocks and assemble the corresponding shape according to the shape hint on the screen.
It should be noted that the first image is acquired by an image acquisition device in communication with the intelligent interaction device, the image acquisition device may be an internal camera or an external camera, and fig. 2 to 5 illustrate the external camera as an example, where a dotted line area in the figures is an image acquisition area of the intelligent interaction device.
As an alternative embodiment, the method further includes: acquiring a second image, wherein the second image comprises a plurality of interactive objects to be operated; and identifying attribute information of the interactive object to be operated based on the second image.
The interactive objects to be operated in the second image are all interactive objects, for example, when the interactive objects are building blocks, the interactive objects to be operated are a whole set of building blocks.
The second image can be acquired through a camera device of the intelligent interaction equipment. For example, as shown in fig. 6, before the start of the block game process, all blocks are shot by a camera of the intelligent interaction device, so as to obtain attribute information of all blocks in fig. 6, and implement registration and identification of all blocks.
The attribute information of the interactive object may include color information, contour information, and shape information of the interactive object. Referring again to FIG. 6, in an alternative embodiment, the color, outline, and shape of each block may be identified based on the second image.
As an alternative embodiment, the attribute information of the interactive object includes color information of the interactive object, and the identifying the attribute information of the interactive object to be operated based on the second image includes: performing color segmentation on the second image to obtain a sub-image corresponding to each interactive object to be operated; converting the subimage into an HSV space to obtain a first parameter of the subimage in a hue channel and a second parameter of a saturation channel; and determining color information of the interactive object to be operated based on the first parameter and the second parameter.
Because the image collected by the camera is an RGB three-channel image, in which each value indicates only the intensity of the corresponding channel and cannot directly express a color range, the sub-image is converted into HSV (hue, saturation, value) space, where the H component clearly expresses the color characteristics of the image. The first parameter, from the hue channel, represents the hue of the sub-image, and the second parameter, from the saturation channel, represents its saturation.
In an alternative embodiment, when converting the sub-image into HSV space, a preset value H_color is set for the hue channel H and a preset value S_color for the saturation channel S, and the color interval of each sub-image's pixel values is judged to obtain the final color area of the sub-image, where the judgment follows this formula:
[Color-interval judgment formula, shown in the original publication only as an image (BDA0003167591820000071).]
If the hue-channel and saturation-channel values h_i and s_i of a sub-image's pixels satisfy the judgment formula, the sub-image of the interactive object is considered to be of the corresponding preset color. As shown in FIGS. 7a, 7b, 7c and 7d, the areas of all blue, green, yellow and orange building blocks in FIG. 6 are obtained, respectively.
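Since the judgment formula is reproduced in the publication only as an image, the sketch below shows one plausible form of the hue/saturation interval test using Python's stdlib colorsys: a pixel is assigned a preset color when its hue h_i lies within a tolerance of H_color and its saturation s_i exceeds S_color. All threshold values here are assumptions for illustration, not the patent's values.

```python
import colorsys

# (H_color, S_color) per preset color; hue is in [0, 1) as returned by colorsys.
# Reference hues and the tolerance T_H are illustrative assumptions.
PRESET_COLORS = {
    "blue":   (0.667, 0.30),
    "green":  (0.333, 0.30),
    "yellow": (0.167, 0.30),
    "orange": (0.108, 0.30),
}
T_H = 0.04  # allowed hue deviation from H_color

def classify_pixel(r, g, b):
    """Return the preset color whose interval contains this RGB pixel, or None."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    for name, (h_color, s_color) in PRESET_COLORS.items():
        if abs(h - h_color) <= T_H and s >= s_color:
            return name
    return None
```

A production version would also handle the hue wrap-around at 0/1 (relevant for red) and the per-sub-image averaging described in the calibration step.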
Meanwhile, to compensate for color deviation caused by illumination in different environments, the color value of the interactive object to be operated is calibrated when the first parameter (hue) and second parameter (saturation) of a sub-image's pixels are determined: a two-pass average of the first and second parameters is taken over all pixel values in each sub-image, yielding the final color information of the interactive object to be operated.
When the second image is color-divided, the second image is automatically color-divided based on the color information of the interactive object.
As an alternative embodiment, the attribute information of the interactive object includes contour information of the interactive object, and identifying the attribute information of the interactive object to be operated based on the second image further includes: performing dilation on the sub-image to obtain a dilated sub-image; eroding the dilated sub-image to obtain an eroded sub-image; and performing contour detection on the eroded sub-image to obtain the contour information of the interactive object to be operated.
After the color segmentation is performed on the second image, the shape of the interactive object needs to be further extracted, so as to obtain the contour information of the interactive object to be operated.
In an alternative embodiment, the expansion processing may be performed by scanning each element in the sub-image with an expansion structuring element and performing an OR operation between each pixel of the structuring element and the pixel it covers: if both are 0, the output pixel is 0; otherwise, the output pixel is 1. By this expansion operation, an expanded sub-image can be obtained.
In an alternative embodiment, when performing erosion processing on the expanded sub-image, a larger erosion kernel may be selected. The erosion processing can be completed by scanning each element in the expanded sub-image with an erosion structuring element and performing an AND operation between each pixel of the structuring element and the pixel it covers: if both are 1, the output pixel is 1; otherwise, the output pixel is 0. By this erosion operation, an eroded sub-image can be obtained; starting from fig. 7c, the expansion and erosion processing yields the image of fig. 8a.
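The expand-then-erode step (morphological closing, which fills small holes in a block's mask) can be sketched in pure Python on a binary image with an assumed 3x3 square structuring element; a real implementation would typically use library routines such as OpenCV's dilate/erode.

```python
# Minimal sketch of dilation followed by erosion (morphological closing)
# on a binary image, with a 3x3 square structuring element (an assumption;
# the patent does not fix the kernel shape or size).

def _covered(img, r, c):
    """Pixels covered when the 3x3 element is centered at (r, c)."""
    h, w = len(img), len(img[0])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                yield img[rr][cc]

def dilate(img):
    # OR rule: output is 0 only when every covered pixel is 0, else 1.
    return [[1 if any(_covered(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def erode(img):
    # AND rule: output is 1 only when every covered pixel is 1, else 0.
    return [[1 if all(_covered(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def close_holes(img):
    """Dilation then erosion, as in the embodiment above."""
    return erode(dilate(img))
```

Running the closing on a block mask with a one-pixel hole fills the hole, which is exactly why the embodiment expands before eroding.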
In an alternative embodiment, contour detection may be performed on the eroded sub-image using one of the Sobel, Prewitt, Canny and Laplacian operators, so as to obtain the contour of the building block shown in fig. 8b.
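For a clean binary mask, the contour can also be read off directly: a foreground pixel belongs to the contour if any of its 4-neighbors is background (or lies off the image). The sketch below illustrates this idea only; the operators named above, or a library routine such as OpenCV's findContours, would be used in practice.

```python
# Illustrative contour extraction from a binary mask: keep foreground
# pixels that touch the background (or the image border).

def contour_pixels(mask):
    h, w = len(mask), len(mask[0])
    out = set()
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            # 4-neighborhood test: off-image counts as background.
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]:
                    out.add((r, c))
                    break
    return out
```

On a 3x3 solid block only the outer ring of 8 pixels survives; the single interior pixel is discarded.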
As an alternative embodiment, the attribute information of the interactive object includes shape information of the interactive object, and the attribute information of the interactive object to be operated is identified based on the second image, and the method further includes: and detecting the geometric characteristics of the outline information of the interactive object to be operated to obtain the shape information of the interactive object to be operated.
The geometric characteristics may include characteristics such as a volume, a geometric center, and a surface area of the interactive object, and the shape information of the interactive object to be operated in the second image is determined by performing corresponding geometric characteristic detection on the contour information.
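Given a contour as an ordered list of vertices, such geometric characteristics can be sketched as below. The shoelace area and the vertex-average center are illustrative stand-ins for whatever characteristic detection the embodiment uses (the vertex average only approximates the true geometric center for irregular contours).

```python
# Sketch of geometric-characteristic detection on a closed contour given
# as ordered (x, y) vertices: area via the shoelace formula, plus a
# simple vertex-average proxy for the geometric center.

def polygon_area(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def polygon_center(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))
```

Comparing area, center and vertex count across registered blocks is one plausible way to tell, e.g., a large rectangle from a small one.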
As an optional embodiment, the interaction prompt information is further used for indicating an operation area, and extracting the operation result information of the interaction subject performing the preset operation from the first image of the interaction object includes: acquiring an area image corresponding to the operation area in the first image; and identifying attribute information of the interactive object in the area image to obtain the operation result information.
The indicated operation area is the area in which the interaction subject needs to place the interactive object according to the interaction prompt information. In this scheme, the intelligent interaction device not only displays the interaction prompt information but also displays, in real time, the image acquired by the image acquisition device, and the interaction prompt information can designate part of that image as the operation area, so that the operation result is identified from this area.
Fig. 9 is a schematic diagram of a sixth alternative building block interaction according to an embodiment of the present invention. In an alternative implementation, in combination with fig. 9, the interaction prompt message is: "Please place the found black building blocks in the dotted-line frame on the screen", i.e., the interaction prompt information instructs the interaction subject to place the black building blocks in a specified area. When the intelligent interaction device interacts with the interaction subject under this prompt, operation result information can be extracted from the indicated operation area. Still taking fig. 9 as an example, only the attribute information of the interactive objects inside the frame shown on the screen is acquired, from which the operation result information is obtained.
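Restricting recognition to the dotted-line frame amounts to cropping that region out of the first image before identifying block attributes. A minimal sketch, assuming the operation area is an axis-aligned box given as hypothetical (x, y, w, h) pixel coordinates:

```python
# Sketch of extracting the region image for the indicated operation area.
# The (x, y, w, h) box representation is an assumption for illustration.

def crop_region(image, box):
    """image: 2-D list of pixel values; box: (x, y, w, h) in pixels."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]
```

Attribute identification (color, contour, shape) then runs only on the cropped region image.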
As an alternative embodiment, determining the evaluation parameter for the preset operation performed by the interaction subject based on the operation result information includes: acquiring standard operation result information corresponding to the interaction prompt information; acquiring a matching degree parameter between the operation result information extracted from the first image of the interactive object and the standard operation result information; and determining the matching degree parameter as the evaluation parameter.
The standard operation result information can be set on the intelligent interaction device in a preset mode, and when the evaluation parameters for executing the preset operation on the interaction subject are determined based on the operation result information, the standard operation result information can be obtained from the intelligent interaction device. The standard operation result information is information that the interactive main body completes a correct preset operation result on the interactive object according to the interactive prompt information.
The matching degree parameter is used for representing the accuracy degree of the interaction subject executing the preset operation.
In an optional implementation manner, the interactive object is a building block, the operation result information of the preset operation may include operation result information that all black rectangular building blocks of the same size have been placed at a specified position, and the matching degree parameter indicates, by means of multiple levels, the accuracy with which the interaction subject performed the preset operation. Specifically, the matching degree parameter between the operation result information extracted from the first image of the interactive object and the standard operation result information may be acquired. Taking the three levels "excellent", "good" and "poor" as an example: when the matching degree parameter is at the "excellent" level, the interaction subject has correctly completed all preset operations on the interaction object according to the interaction prompt information; when it is at the "good" level, the interaction subject has completed part of the preset operations; and when it is at the "poor" level, the interaction subject has completed only a small part of the preset operations. Here, "all", "part" and "a small part" of the preset operations are relative notions.
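One hedged way to realize this grading is to score the fraction of expected attribute values that the extracted operation result satisfies and bucket the ratio into levels. The 0.9/0.6 cut-offs and the attribute-dictionary representation below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: matching degree between extracted and standard operation
# result information, mapped to the three example grades. Thresholds are
# illustrative assumptions.

def match_ratio(result, standard):
    """Fraction of expected (attribute -> value) items satisfied."""
    if not standard:
        return 1.0
    hits = sum(1 for k, v in standard.items() if result.get(k) == v)
    return hits / len(standard)

def grade(ratio):
    if ratio >= 0.9:
        return "excellent"
    if ratio >= 0.6:
        return "good"
    return "poor"
```

For example, a placement matching both expected color and shape grades "excellent", while one matching only half the expected attributes grades "poor".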
As an alternative embodiment, after extracting operation result information of the interaction subject performing the preset operation from the first image of the interaction object, and determining an evaluation parameter of the preset operation performed on the interaction subject based on the operation result information, the method further includes: judging whether the evaluation parameters meet preset conditions or not; under the condition that the evaluation parameters meet preset conditions, at least one item of information is sent to a designated terminal: the method comprises the steps of interaction prompt information, a first image, evaluation parameters and video information of an interaction object operated by an interaction main body, wherein the video information is collected in the process of the interaction object operated by the interaction main body.
The preset conditions can be determined according to actual requirements. For example, the condition may be that the evaluation parameter is higher than a first preset value, that the evaluation parameter is lower than a second preset value, and so on. The designated terminal may be a smart device of the child's parent, so that the parent can learn about the child's play.
In an optional implementation manner, the interactive object is a building block, the interactive subject is a student, and the designated terminal may be an intelligent terminal held by a teacher or an intelligent terminal held by a parent of a child. Taking an intelligent terminal held by a teacher as an example of a designated terminal, the intelligent interaction device can be provided with a student client, the intelligent terminal is provided with a teacher client, and when the evaluation parameter meets a preset condition, the student client can evaluate and feed back the interaction object according to the interaction prompt information, the first image, the evaluation parameter and the video information of the interaction object operated by the interaction main body. Meanwhile, the student client sends the interaction prompt information, the first image, the evaluation parameter and the video information of the interaction subject operation interaction object to the teacher client, and the teacher can assist in judging the learning condition of the specific ability of the student according to the information acquired by the teacher client.
In another optional implementation manner, the accuracy that the interaction subject performs the preset operation according to the interaction prompt information may be set, that is, the second preset value is set, and in the case that the evaluation parameter is lower than the second preset value, the intelligent interaction device may send the current operation condition of the child to the parent of the child.
FIG. 10 is a flowchart of an alternative intelligent interactive device according to an embodiment of the present invention, in this example, the interactive object is still a building block. As shown in fig. 10, the application program of the building block game is installed on the intelligent interaction device, and before starting the building block game, building block registration and matching are required, specifically, the whole set of building blocks may be shot by the image acquisition device, and the attribute information of all the building blocks, such as color information, contour information, and shape information, may be obtained by color segmentation, image preprocessing, contour detection, and geometric characteristic calculation of the contour information. After the registration is carried out once, under the condition of using the set of building block games, the subsequent games can be directly played by using the registration result of the first time without carrying out registration every time; when a new block needs to be added, a new record can be registered.
After the building block registration and matching, the intelligent interaction device displays specific interaction prompt information on the screen according to a preset interaction flow, and the child begins to build with the blocks according to the prompt. While the child builds, the intelligent interaction device records video of the process; after the child finishes building, the device evaluates the built blocks and sends the interaction prompt information, the first image, the evaluation parameter and the video of the child operating the blocks to the parent or teacher for auxiliary judgment.
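The register-once, reuse-later flow described above can be sketched as a simple registry keyed by block-set ID. The names and record fields here are assumptions for illustration; the records would be the attribute information (color, contour, shape) produced by the steps above.

```python
# Sketch of the one-time registration flow: attribute records for a whole
# block set are stored under a set ID and reused in later game sessions;
# a newly added block registers one extra record.

REGISTRY = {}  # set_id -> list of block attribute records (assumed shape)

def register_blocks(set_id, records, force=False):
    """Store the block records the first time; later calls reuse them."""
    if force or set_id not in REGISTRY:
        REGISTRY[set_id] = list(records)
    return REGISTRY[set_id]

def add_block(set_id, record):
    """Register a single new block added to an existing set."""
    REGISTRY.setdefault(set_id, []).append(record)
```

A second call with the same set ID returns the original registration, matching the embodiment's "register once, play directly afterwards" behavior.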
Example 2
Fig. 11 is a schematic diagram of a control apparatus of an intelligent interactive device according to an embodiment of the present invention, which, in conjunction with fig. 11, can perform relevant steps in the embodiment of the present invention, and the apparatus includes:
the display module 1100 is configured to display interaction prompt information, where the interaction prompt information is used to instruct an interaction subject to complete a preset operation on an interaction object;
a receiving module 1102, configured to receive a completion instruction, and collect a first image of an interactive object;
the determining module 1104 is configured to extract operation result information of the interaction subject performing the preset operation from the first image of the interaction object, and determine an evaluation parameter of the interaction subject performing the preset operation based on the operation result information, where the evaluation parameter is used to represent an accuracy degree of the interaction subject performing the preset operation.
As an optional embodiment, the interactive object is a building block, and the interactive prompt information includes at least one of the following types: interactive prompt information for distinguishing the shapes of the building blocks; interactive prompt information for distinguishing the sizes of the building blocks; and interactive prompt information for distinguishing the colors of the building blocks.
As an alternative embodiment, the apparatus further comprises:
the acquisition module is used for acquiring a second image, wherein the second image comprises a plurality of interactive objects to be operated;
and the first identification module is used for identifying the attribute information of the interactive object to be operated based on the second image.
As an alternative embodiment, the attribute information of the interactive object includes color information of the interactive object, and the first identifying module includes:
the segmentation module is used for carrying out color segmentation on the second image to obtain a sub-image corresponding to each interactive object to be operated;
the conversion module is used for converting the sub-image into an HSV space to obtain a first parameter of the sub-image in a hue channel and a second parameter of a saturation channel;
and the determining module is used for determining the color information of the interactive object to be operated based on the first parameter and the second parameter.
As an alternative embodiment, the attribute information of the interactive object includes contour information of the interactive object, and the first identification module further includes:
the expansion module is used for performing expansion processing on the sub-image to obtain an expanded sub-image;
the erosion module is used for performing erosion processing on the expanded sub-image to obtain the eroded sub-image;
and the first detection module is used for performing contour detection on the eroded sub-image to obtain the contour information of the interactive object to be operated.
As an alternative embodiment, the attribute information of the interactive object includes shape information of the interactive object, and the first identification module further includes:
and the second detection module is used for detecting the geometric characteristics of the outline information of the interactive object to be operated to obtain the shape information of the interactive object to be operated.
As an optional embodiment, the interaction prompt information is further used for indicating an operation area, and the modules for extracting the operation result information of the interaction subject performing the preset operation from the first image of the interaction object include:
the first acquisition module is used for acquiring a region image corresponding to the operation region in the first image;
and the second identification module is used for identifying the attribute information of the interactive object in the area image to obtain operation result information.
The first determining submodule is used for determining the evaluation parameter for the preset operation performed by the interaction subject based on the operation result information, and includes:
the second acquisition module is used for acquiring standard operation result information corresponding to the interactive prompt information;
the third acquisition module is used for acquiring a matching degree parameter between the operation result information extracted from the first image of the interactive object and the standard operation result information;
the second determined matching degree parameter is an evaluation parameter.
In an optional embodiment, the apparatus further includes:
the judging module is used for judging whether the evaluation parameters meet preset conditions or not;
the sending module is used for sending at least one item of information to the specified terminal under the condition that the evaluation parameters meet the preset conditions: the method comprises the steps of interaction prompt information, a first image, an evaluation parameter and video information of an interaction object operated by an interaction main body, wherein the video information is collected in the process of the interaction object operated by the interaction main body.
Example 3
According to an embodiment of the present application, there is provided an embodiment of a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any one of embodiment 1 or embodiment 2.
Example 4
According to an embodiment of the present application, there is provided an intelligent interactive tablet, including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of embodiment 1 or embodiment 2.
Fig. 12 is a schematic diagram of an intelligent interaction tablet according to an embodiment of the present application, where the intelligent interaction tablet includes the interaction device main body and the touch frame, and as shown in fig. 12, an intelligent interaction tablet 1300 may include: at least one processor 1301, at least one network interface 1304, a user interface 1303, memory 1305, at least one communication bus 1302.
Wherein a communication bus 1302 is used to enable the connective communication between these components.
The user interface 1303 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1303 may also include a standard wired interface and a wireless interface.
The network interface 1304 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
Processor 1301 may include one or more processing cores, among other things. The processor 1301 connects various parts throughout the intelligent interactive tablet 1300 using various interfaces and lines, and performs various functions of the intelligent interactive tablet 1300 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1305 and calling data stored in the memory 1305. Optionally, the processor 1301 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 1301 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is used for rendering and drawing the content to be shown on the display screen; the modem is used to handle wireless communications. It is to be understood that the modem may not be integrated into the processor 1301, but may be implemented by a separate chip.
The memory 1305 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1305 includes a non-transitory computer-readable medium. The memory 1305 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1305 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above respective method embodiments. The memory 1305 may optionally be at least one memory device located remotely from the processor 1301. As shown in fig. 12, the memory 1305, as a type of computer storage medium, may include an operating system, a network communication module, a user interface module, and an operating application of the smart interactive tablet.
In the smart interactive tablet 1300 shown in fig. 12, the user interface 1303 is mainly used to provide an input interface for a user to obtain data input by the user; and the processor 1301 may be configured to call an operation application of the smart interactive tablet stored in the memory 1305, and to perform any one of the operations of embodiment 1 or embodiment 2 in detail.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. A control method of intelligent interaction equipment is characterized by comprising the following steps:
displaying interaction prompt information, wherein the interaction prompt information is used for indicating an interaction subject to complete preset operation on an interaction object;
receiving a completion instruction, and acquiring a first image of the interactive object;
extracting operation result information of the interaction subject for executing the preset operation from the first image of the interaction object, and determining an evaluation parameter for executing the preset operation on the interaction subject based on the operation result information, wherein the evaluation parameter is used for representing the accuracy of the interaction subject for executing the preset operation.
2. The method of claim 1, wherein the interactive object is a building block, and the interactive prompt message comprises at least one of the following types:
interactive prompt information for distinguishing the shapes of the building blocks;
interactive prompt information for distinguishing the sizes of the building blocks; and
and the interactive prompt information is used for distinguishing the colors of the building blocks.
3. The method of claim 2, further comprising:
acquiring a second image, wherein the second image comprises a plurality of interactive objects to be operated;
and identifying attribute information of the interactive object to be operated based on the second image.
4. The method of claim 3, wherein the attribute information of the interactive object comprises color information of the interactive object, and identifying the attribute information of the interactive object to be operated based on the second image comprises:
performing color segmentation on the second image to obtain a sub-image corresponding to each interactive object to be operated;
converting the sub-image into an HSV space to obtain a first parameter of the sub-image in a hue channel and a second parameter of a saturation channel;
determining color information of the interactive object to be operated based on the first parameter and the second parameter.
5. The method of claim 4, wherein the attribute information of the interactive object comprises contour information of the interactive object, and the attribute information of the interactive object to be operated is identified based on the second image, further comprising:
performing expansion processing on the sub-image to obtain an expanded sub-image;
performing erosion processing on the expanded sub-image to obtain an eroded sub-image;
and performing contour detection on the eroded sub-image to obtain the contour information of the interactive object to be operated.
6. The method of claim 4, wherein the attribute information of the interactive object includes shape information of the interactive object, and the attribute information of the interactive object to be operated is identified based on the second image, further comprising:
and detecting the geometric characteristics of the outline information of the interactive object to be operated to obtain the shape information of the interactive object to be operated.
7. The method according to claim 1, wherein the interaction prompt information is further used for indicating an operation area, and the extracting operation result information of the interaction subject performing the preset operation from the first image of the interaction object includes:
acquiring a region image corresponding to the operation region in the first image;
and identifying the attribute information of the interactive object in the area image to obtain the operation result information.
8. The method according to claim 1, wherein determining an evaluation parameter for performing the preset operation on the interaction subject based on the operation result information comprises:
acquiring standard operation result information corresponding to the interaction prompt information;
acquiring a matching degree parameter between operation result information extracted from a first image of the interactive object and the standard operation result information;
and determining the matching degree parameter as the evaluation parameter.
9. The method according to claim 1, wherein after extracting operation result information of the interaction subject performing the preset operation from the first image of the interaction object, and determining an evaluation parameter of the preset operation performed on the interaction subject based on the operation result information, the method further comprises:
judging whether the evaluation parameters meet preset conditions or not;
under the condition that the evaluation parameters meet the preset conditions, at least one item of information is sent to a designated terminal: the interactive prompt information, the first image, the evaluation parameter and the video information of the interactive object operated by the interactive main body are acquired in the process of operating the interactive object by the interactive main body.
10. A control device of intelligent interaction equipment is characterized by comprising:
the display module is used for displaying interaction prompt information, wherein the interaction prompt information is used for indicating an interaction subject to complete preset operation on an interaction object;
the receiving module is used for receiving a finishing instruction and acquiring a first image of the interactive object;
the determining module is configured to extract operation result information of the interaction subject performing the preset operation from the first image of the interaction object, and determine an evaluation parameter of the interaction subject performing the preset operation based on the operation result information, where the evaluation parameter is used to represent an accuracy degree of the interaction subject performing the preset operation.
11. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any of claims 1 to 9.
12. An intelligent interactive tablet, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 9.
CN202110809229.XA 2021-07-16 2021-07-16 Control method and device of intelligent interaction equipment and intelligent interaction panel Pending CN115607978A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809229.XA CN115607978A (en) 2021-07-16 2021-07-16 Control method and device of intelligent interaction equipment and intelligent interaction panel

Publications (1)

Publication Number Publication Date
CN115607978A true CN115607978A (en) 2023-01-17

Family

ID=84856211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809229.XA Pending CN115607978A (en) 2021-07-16 2021-07-16 Control method and device of intelligent interaction equipment and intelligent interaction panel

Country Status (1)

Country Link
CN (1) CN115607978A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118035689A (en) * 2024-04-11 2024-05-14 中国信息通信研究院 Intelligent equipment online operation system based on real-time three-dimensional model reconstruction


Similar Documents

Publication Publication Date Title
US9669312B2 (en) System and method for object extraction
CN107105310B (en) Human image replacing method and device in video live broadcast and recording and broadcasting system
CN109176535B (en) Interaction method and system based on intelligent robot
CN105868282A (en) Method and apparatus used by deaf-mute to perform information communication, and intelligent terminal
US10643090B2 (en) Method and device for guiding users to restore Rubik's cube
CN106355592B (en) Educational toy set, circuit element thereof and wire identification method
CN106200918A (en) A kind of method for information display based on AR, device and mobile terminal
CN115607978A (en) Control method and device of intelligent interaction equipment and intelligent interaction panel
CN114519889A (en) Cover image detection method and device for live broadcast room, computer equipment and medium
CN112132750B (en) Video processing method and device
CN113918074A (en) Light and shadow entity interaction device and method for cognitive teaching of autism children
CN115690635A (en) Video processing method and device, computer storage medium and intelligent interactive panel
CN108040239B (en) Knowledge training system and method based on image recognition
CN106530013A (en) Advertisement push method and apparatus
CN110796740A (en) Security protection method, system and readable storage medium based on AR game
CN111655148B (en) Heart type analysis method based on augmented reality and intelligent equipment
CN106709959B (en) method and device for recognizing chocolate plate and electronic equipment
CN112691380B (en) Game resource material auditing method and device, storage medium and computer equipment
CN113893548A (en) Game resource material auditing method and device, storage medium and computer equipment
CN107025674B (en) Method and system for displaying painting identification picture based on augmented reality
CN106339397A (en) Method, device and terminal for selecting fruits
CN115705628A (en) Image processing method, apparatus and storage medium
CN115756159A (en) Display method and device
KR20210092161A (en) A method of providing racing game contents by 360 degree enhancement of reality recognition on the side of the vehicle
CN114357213A (en) Traditional clothing image interaction method and system based on AR technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination