CN112201116B - Logic board identification method and device and terminal equipment - Google Patents


Info

Publication number
CN112201116B
CN112201116B (application CN202011046745.3A)
Authority
CN
China
Prior art keywords: target, color, movable member, page, target movable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011046745.3A
Other languages
Chinese (zh)
Other versions
CN112201116A (en)
Inventor
王玥
顾景
程骏
庞建新
Current Assignee
Beijing Youbixuan Intelligent Robot Co ltd
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp
Priority to CN202011046745.3A
Publication of CN112201116A
Application granted
Publication of CN112201116B
Legal status: Active
Anticipated expiration

Classifications

    • G09B 19/0023: Colour matching, recognition, analysis, mixture or the like (educational or demonstration appliances; teaching not covered by other main groups)
    • G06F 18/23: Clustering techniques (pattern recognition; analysing)
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology (computing arrangements based on biological models)
    • G06N 3/08: Neural networks; learning methods (computing arrangements based on biological models)
    • G06V 20/40: Scenes; scene-specific elements in video content (image or video recognition or understanding)

Abstract

The invention is applicable to the technical field of artificial intelligence, and provides a logic board identification method, a logic board identification device and terminal equipment. An image of the logic board is input into a trained neural network model to detect the picture book page, the movable members and the answer positions in the image, and the detection result output by the trained neural network model is obtained; a target picture book page is identified in the detection result; when the target picture book page is successfully identified, the target movable members, their colors and their positions are identified in the detection result; and according to the answer positions in the detection result and the colors and positions of the target movable members, the execution result of the logic training task in the picture book page performed by the user is obtained and output. Whether the user has correctly executed the logic training task on the picture book page placed on the logic board can thus be identified by image recognition and machine learning based on a neural network model, which effectively improves identification efficiency and saves manpower.

Description

Logic board identification method and device and terminal equipment
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a logic board identification method and device and terminal equipment.
Background
The logic board is a teaching aid for training the logical thinking ability of young children. It is provided with an area for placing a picture book page and with a plurality of movable members; the picture book page carries a logic training task that pairs the images on the page with movable members of the corresponding colors, and the user must move the movable members of the relevant colors to designated positions on the logic board to execute the task. At present, a parent usually needs to accompany a young child using the logic board and help judge whether the logic training task has been executed correctly, which is inefficient and wastes manpower.
Disclosure of Invention
In view of this, embodiments of the present invention provide a logic board identification method, an apparatus, and a terminal device, so as to solve the prior art problems that a young child usually needs to be accompanied by a parent when using a logic board, with the parent helping to judge whether the pairing task is performed correctly, which is inefficient and wastes manpower.
A first aspect of an embodiment of the present invention provides a logic board identification method, where the logic board includes a first area, a second area, and K movable members that are identical in shape and different in color, the first area is used for placing a picture book page, the second area is provided with 2 × K placement positions, and each placement position is used for placing one movable member, where the logic board identification method includes:
acquiring an image of the logic board, inputting the image into a trained neural network model to detect the picture book page, the movable members and the answer positions in the image, and acquiring the detection result output by the trained neural network model; wherein an answer position is a placement position at which a movable member is located when the user correctly executes the logic training task in the picture book page;
identifying a target picture book page in the detection result;
when the target picture book page is successfully identified, identifying a target movable member, the color of the target movable member and the position of the target movable member in the detection result;
and acquiring an execution result of the logic training task in the target picture book page executed by the user according to the answer positions in the detection result, the color of the target movable member and the position of the target movable member, and outputting the execution result.
A second aspect of an embodiment of the present invention provides a logic board identification apparatus, where the logic board includes a first area, a second area, and K movable members with the same shape and different colors, the first area is used for placing a picture book page, the second area is provided with 2 × K placement positions, and each placement position is used for placing one movable member, the logic board identification apparatus including:
an image detection module, configured to acquire an image of the logic board, input the image into a trained neural network model to detect the picture book page, the movable members and the answer positions in the image, and acquire the detection result output by the trained neural network model; wherein an answer position is a placement position at which a movable member is located when the user correctly executes the logic training task in the picture book page;
a first identification module, configured to identify a target picture book page in the detection result;
a second identification module, configured to identify a target movable member, the color of the target movable member and the position of the target movable member in the detection result when the target picture book page is successfully identified;
and a result output module, configured to acquire and output an execution result of the logic training task in the target picture book page executed by the user according to the answer positions in the detection result, the color of the target movable member and the position of the target movable member.
A third aspect of the present invention provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and further includes a camera or is in communication connection with the camera, and when the processor executes the computer program, the steps of the logic board identification method according to the first aspect of the present invention are implemented.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the logic board identification method according to the first aspect of the embodiments of the present invention.
The first aspect of the embodiment of the present invention provides a logic board identification method in which an image is input into a trained neural network model to detect the picture book page, the movable members and the answer positions in the image, and the detection result output by the trained neural network model is obtained; a target picture book page is then identified in the detection result; when the target picture book page is successfully identified, the target movable member, its color and its position are identified in the detection result; and finally, according to the answer positions in the detection result and the color and position of the target movable member, the execution result of the logic training task in the target picture book page executed by the user is obtained and output. By replacing manual judgment with artificial intelligence technology, using image recognition and machine learning based on a neural network model, whether the user has correctly executed the logic training task on the picture book page placed on the logic board can be identified, which effectively improves identification efficiency and saves manpower.
It is to be understood that, for the beneficial effects of the second aspect to the fourth aspect, reference may be made to the relevant description in the first aspect, and details are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a logic board according to an embodiment of the present invention;
FIG. 2 is a first flowchart of a logic board recognition method according to an embodiment of the present invention;
fig. 3 and 4 are schematic diagrams of relative positional relationships between a terminal device and a logic board according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an execution result output by a terminal device according to an embodiment of the present invention;
FIG. 6 is a second flowchart of a logic board recognition method according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of an image of a logic board after annotation as provided by an embodiment of the present invention;
FIG. 8 is a third flowchart illustrating a logic board identification method according to an embodiment of the present invention;
FIG. 9 is a fourth flowchart illustrating a logic board identification method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a logic board identification apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
An embodiment of the present application provides a logic board that includes a first area, a second area, and K movable members identical in shape and different in color, where the first area is used for placing a picture book page, the second area is provided with 2K placement positions, each placement position is used for placing one movable member, and K is a positive integer.
In application, the shapes and sizes of the logic board, the first area and the second area can be set according to actual needs: the logic board must be large enough to hold and retain all movable members and at least one picture book page, the first area must be large enough to hold and retain at least one picture book page, and the second area must be large enough to hold and retain all movable members.
In application, the logic board may be rectangular, circular, oval, cartoon-shaped, or any regular ergonomic shape. The first area includes a recess or a limiting structure for placing and retaining at least one picture book page; the recess has the same shape as the picture book page and a size greater than or equal to the size of the page.
In application, the placement positions are fixedly disposed in the second area and cannot be moved. A placement position can be a groove, a bump, a raised edge or another limiting structure for placing and retaining a movable member, and the structure at the bottom of the movable member mates with the structure of the placement position; a building-block style mating structure can be adopted, for example.
In use, each movable member can be moved and placed in any of the placement positions. The shape of a movable member can be rectangular, triangular, circular, oval, cartoon-shaped or any other regular shape. K can be set to any positive integer according to actual needs, that is, the logic board should include at least 1 movable member and at least 2 placement positions. The K colors can be any clearly distinguishable colors that are convenient to identify; for example, when K is 6, the colors can be chosen from red, orange, yellow, green, cyan, blue and purple. When the movable member is polygonal, its size includes the length of each side and may also include the area; when the movable member has a curved shape such as a circle or an ellipse, its size includes the diameter and may also include the circumference or area.
In one embodiment, the second area is provided with slide rails and K movable members of K colors that can move along the slide rails. The slide rails include K first slide rails arranged at intervals in a first direction and a second slide rail arranged in a second direction perpendicular to the first direction; the second slide rail connects the midpoints of the K first slide rails, and the two end points of each first slide rail serve as placement positions.
In application, the first slide rail and the second slide rail may be slide bars or slide grooves. The first direction may be a direction parallel to the first edge of the page of the picture book when the picture book is placed in the first area, and the second direction may be a direction parallel to the second edge of the page of the picture book.
As shown in fig. 1, an exemplary logic board 1 includes a rectangular first area 10, a rectangular second area 20, and 6 movable members 30 with colors drawn from red, orange, yellow, green, cyan, blue and purple, where the second area 20 is provided with 12 placement positions 21, 6 first slide rails 22 arranged at intervals in a first direction, and 1 second slide rail 23 arranged along a second direction. The logic board 1 is rectangular, the first direction is parallel to the first edge of the first area 10, the second direction is parallel to the second edge of the first area 10, the first slide rails 22 and the second slide rail 23 are both slide grooves, the two end points of each first slide rail 22 serve as placement positions 21, the first direction is perpendicular to the second direction, and the arrow "→" indicates the first direction.
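As a concrete, non-limiting illustration of this layout, the board of fig. 1 could be modeled as follows. The class name, field names and six-color choice are illustrative assumptions for the K = 6 example, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class LogicBoardLayout:
    """Illustrative model of the board in fig. 1: K movable members and
    2*K placement positions, two at the end points of each first slide rail."""
    k: int = 6
    colors: tuple = ("red", "orange", "yellow", "green", "cyan", "blue")

    def placement_positions(self):
        # One position at each end point of each of the K first slide
        # rails, indexed as (rail, end) with end in {0, 1}.
        return [(rail, end) for rail in range(self.k) for end in (0, 1)]

layout = LogicBoardLayout()
```

With the default K = 6 this yields the 12 placement positions of fig. 1; any positive K gives 2K positions, matching the general description above.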
It should be understood that different colored movable members are represented in fig. 1 by different fill patterns, respectively; the end point shape of the slide rail can also be set according to actual needs, is not limited to be U-shaped, and can also be rectangular, circular or other shapes which are convenient to be matched with the bottom of the movable piece.
In application, the picture book page is used for training the logical thinking ability of a user. The second area is an answering area: after reading a picture book page placed in the first area, the user answers according to the logic training task on the page by moving the movable members to the corresponding placement positions (that is, the answer positions) as the task requires, thereby executing the logic training task. The user can be a young child, a person with an intellectual disability, or any person needing logical thinking training, or a guardian of such a person. For users with low reading ability, the voice broadcast function of the terminal device can be started through any human-computer interaction mode supported by the terminal device, so that the terminal device identifies the picture book page placed in the first area and broadcasts its content and logic training task by voice. After executing the logic training task, the user can start the logic board identification function of the terminal device through any supported human-computer interaction mode, so that the device identifies whether the task was executed correctly and outputs the execution result. The terminal device can support human-computer interaction modes such as voice control, gesture control, physical keys or touch keys.
The embodiment of the present application provides a logic board identification method for identifying the above logic board. It may be applied to a robot, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR) device, a desktop computer, a palmtop computer, a notebook computer, a netbook, a Personal Digital Assistant (PDA), or any other terminal device that includes a camera or can be communicatively connected to a camera, and is specifically executed by a processor of the terminal device when running a computer program with the corresponding functions. The robot can in particular be an educational robot, and its size and appearance can be set according to actual needs; for example, it can be a life-sized or miniaturized bionic robot such as a humanoid robot, a robot cat or a robot dog. The embodiment of the present application places no limit on the specific type of the terminal device.
As shown in fig. 2, the logic board identification method provided in the embodiment of the present application includes the following steps S201 to S204:
step S201, obtaining the image of the logic board, inputting the image into the trained neural network model to detect the picture book page, the movable part and the answer position in the image, and obtaining the detection result output by the trained neural network model.
In application, the terminal device acquires the image of the logic board through the camera. The position of the camera needs to be set reasonably so that the logic board lies within the camera's field of view, and the field of view must completely cover the area where the logic board is located. An answer position is the placement position at which a movable member is located when the user correctly executes the logic training task in the picture book page. A number of images corresponding to picture book pages can be obtained in advance and used to train the neural network model to detect the picture book page, the movable members and the answer positions in each image until the model converges, yielding the trained neural network model. The detection result includes the picture book page, the movable members and the answer positions in the image of the logic board, as well as the position of the picture book page, the colors and positions of the movable members, and the positions of the answer positions.
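The detection result described here can be pictured as a list of labeled bounding boxes. The sketch below groups raw detections into the three kinds of objects the model is said to detect; the class labels and dictionary keys are assumptions, since the embodiment does not fix an output format:

```python
def split_detections(detections):
    """Group detections by class: picture book pages, movable members
    (which carry a color attribute), and answer positions."""
    pages = [d for d in detections if d["cls"] == "page"]
    movables = [d for d in detections if d["cls"] == "movable"]
    answers = [d for d in detections if d["cls"] == "answer"]
    return pages, movables, answers

# Example detection result: one page, one red movable member sitting on
# one answer position (boxes are (x0, y0, x1, y1) pixel coordinates).
example = [
    {"cls": "page", "box": (0, 0, 500, 800)},
    {"cls": "movable", "box": (600, 100, 650, 150), "color": "red"},
    {"cls": "answer", "box": (600, 100, 650, 150)},
]
pages, movables, answers = split_detections(example)
```

The later steps S202 to S204 then operate on these three groups rather than on the raw model output.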
Fig. 3 shows an example in which the view angle θ1 of the camera 201 of the terminal device 2, in the first direction of the logic board 1, completely covers the first edge of the logic board 1.
Fig. 4 shows an example in which the view angle θ2 of the camera 201 of the terminal device 2, in the second direction of the logic board 1, completely covers the second edge of the logic board 1.
Step S202, identifying a target picture book page in the detection result.
In application, the target picture book page is the image area, within the image of the logic board, of the picture book page that the user has placed in the first area. Because similar picture book pages may exist among the pages the neural network model can detect, the model may detect at least two picture book pages, in which case the target picture book page needs to be screened out of the detection result. When the neural network model detects only one picture book page, that page in the detection result can be directly determined as the target picture book page. The neural network model may also fail to detect any picture book page.
In application, when the detection result includes at least two picture book pages, the terminal device may use an image recognition method to extract the image of the first area from the image of the logic board and match it against each picture book page in the detection result, determining the picture book page with the greatest matching degree to the first-area image as the target picture book page.
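This screening step amounts to picking the candidate with the highest similarity score. In the sketch below, histogram intersection stands in for the "matching degree"; the embodiment does not specify a particular matching method, so both the measure and the data format are assumptions:

```python
def matching_degree(hist_a, hist_b):
    # Histogram intersection: a simple stand-in similarity measure.
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))

def pick_target_page(first_region_hist, candidates):
    """candidates: list of (page_id, histogram) pairs for the picture
    book pages in the detection result. Returns the page_id with the
    greatest matching degree to the first-region image."""
    best_id, _ = max(
        candidates, key=lambda c: matching_degree(first_region_hist, c[1])
    )
    return best_id

region_hist = [8, 2, 0, 1]
candidates = [("page_a", [8, 2, 0, 1]), ("page_b", [0, 5, 4, 2])]
target = pick_target_page(region_hist, candidates)
```

Any similarity measure with the same "higher is more alike" property (template matching, feature matching, a learned embedding distance) could be substituted for `matching_degree` without changing the selection logic.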
In one embodiment, after step S201, the method includes:
when the detection result does not comprise the picture book page, outputting first prompt information for indicating that the picture book page fails to be detected;
after waiting for the first preset time, returning to step S201;
when the number of detection failures of the target picture book page reaches a first preset number of times, uploading the image to a server so as to detect the target picture book page in the image through the server;
acquiring a detection result of the server;
and when the detection result of the server does not include the drawing page, outputting first prompt information indicating that the detection of the drawing page fails, and returning to the step S201.
In application, a detection result that does not include a picture book page indicates that the user has not correctly placed the picture book page in the first area or that the page is blocked. After the first prompt information is output, the device waits a first preset time for the user to place the picture book page correctly in the first area or to stop blocking it, and then acquires the image of the logic board again and inputs it into the trained neural network model for detection. If, after repeating this a first preset number of times, the detection result still does not include a picture book page, the image of the logic board is uploaded to a server. The server detects the picture book page, the movable members and the answer positions in the image through its own trained neural network model and sends the detection result to the terminal device. The server includes at least one trained neural network model, trained in advance on a large number of images corresponding to many picture book pages. The first preset time and the first preset number of times can be set according to actual needs, for example a first preset time of 30 s and a first preset number of 3 times.
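The retry-then-fallback flow of this embodiment can be sketched as follows. The callable parameters and the `{"pages": [...]}` return format are illustrative assumptions, chosen so the control flow, not any real API, is what is shown:

```python
import time

def detect_page_with_retry(capture, detect_local, detect_server,
                           prompt, max_failures=3, wait_s=30):
    """Re-detect after outputting the first prompt and waiting; once the
    first preset number of failures is reached, upload to the server.
    Each detector returns a dict with a (possibly empty) "pages" list."""
    for _ in range(max_failures):
        result = detect_local(capture())
        if result["pages"]:
            return result
        prompt("picture book page detection failed")  # first prompt info
        time.sleep(wait_s)
    result = detect_server(capture())  # fall back to the server model
    if not result["pages"]:
        prompt("picture book page detection failed")
    return result
```

Because the camera, detectors and prompt output are injected as callables, the loop can be exercised with stubs and `wait_s=0`, without hardware or a network.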
Step S203, when the target picture book page is successfully identified, identifying a target movable member, the color of the target movable member and the position of the target movable member in the detection result.
In application, after the target picture book page is identified, the target movable members can be further identified. A target movable member is the image area, within the image of the logic board, of a movable member that the user has placed in the second area. Under the influence of ambient light, the neural network model may detect more than one movable member of the same color, in which case the target movable member needs to be screened out of the detection result. When the neural network model detects exactly one movable member of each color, the movable members in the detection result can be directly determined as the target movable members. The neural network model may also fail to detect any movable member.
In one embodiment, after step S202, the method includes:
when the detection result does not include a movable member, outputting second prompt information indicating that detection of the target movable member failed;
after waiting a second preset time, returning to step S201;
when the number of detection failures of the target movable member reaches a second preset number of times, entering a standby mode, or, after waiting a third preset time, returning to step S201.
In application, when the detection result does not include a movable member, the user has not placed any movable member in the second area or the second area is blocked, and the terminal device may output second prompt information. The second prompt information prompts the user to place movable members in the second area again according to the requirements of the logic training task, or to stop blocking the second area. After it is output, the device waits a second preset time for the user to place the movable members or stop blocking the second area, then acquires the image of the logic board again and inputs it into the trained neural network model for detection. If, after repeating this a second preset number of times, the detection result still does not include a movable member, the user may have temporarily left or may be struggling with the task; the device then enters a standby mode to wait for the user to start the logic board identification function again, or executes step S201 again after waiting a longer third preset time to detect the movable members. The second preset time, the second preset number of times and the third preset time can be set according to actual needs, for example a second preset time of 5 minutes, a second preset number of 3 times and a third preset time of 10 minutes. The durations may satisfy first preset time < second preset time < third preset time.
Step S204, acquiring and outputting an execution result of the logic training task in the target picture book page executed by the user according to the answer positions in the detection result, the color of the target movable member and the position of the target movable member.
In application, when a user correctly executes the logic training task in a picture book page, the required color of the movable member at each answer position is known. Therefore, by comparing the colors and positions of the target movable members with the known color of the movable member to be placed at each answer position, it can be determined whether the user has placed the movable member of the specified color at the specified answer position as the logic training task requires, and the execution result of the task can be obtained and output.
In one embodiment, step S204 includes:
obtaining the color of the target movable member of the answer position placed in the detection result according to the answer position in the detection result, the color of the target movable member and the position of the target movable member to obtain an answer result;
comparing the answer result with a known correct answer result to obtain and output an execution result of the logic training task executed by the user in the picture book page; wherein the correct answer result comprises the color of the movable piece placed at each answer position when the user correctly executes the logic training task in the picture book page.
In application, the specific method for obtaining the answer result is as follows: determine the target movable piece located at each answer position according to the answer positions in the detection result and the positions of the target movable pieces, and then determine the color of the target movable piece at each answer position according to the colors of the target movable pieces, thereby obtaining the answer result.
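As an illustration only (not part of the claimed embodiment), the position-matching step described above can be sketched in Python; the function name and data shapes are assumptions:

```python
def build_answer_result(answer_positions, pieces):
    """Pair each answer position with the nearest detected target movable
    piece and return the color placed at each answer position.

    answer_positions: list of (x, y) center coordinates.
    pieces: list of dicts with 'center' (x, y) and 'color'.
    """
    result = []
    for ax, ay in answer_positions:
        best_color, best_d2 = None, float("inf")
        for p in pieces:
            px, py = p["center"]
            d2 = (px - ax) ** 2 + (py - ay) ** 2  # squared distance suffices
            if d2 < best_d2:
                best_color, best_d2 = p["color"], d2
        result.append(best_color)
    return result
```

The resulting list of colors, one per answer position, is then compared element-by-element with the known correct answer result.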
In application, the execution result can be output by voice broadcast or display. If the logic training task is executed correctly, a fourth prompt message indicating correct execution is broadcast or displayed, for example cheerful music, or voice, text, graphics, images or animation meaning "executed correctly". If the logic training task is not executed correctly, a fourth prompt message indicating incorrect execution is broadcast or displayed, for example regretful music, or voice, text, graphics, images or animation meaning "not executed correctly"; on this basis, the fourth prompt message may further comprise voice, text, graphics, images or animation informing the user of the correct color of the movable piece and the position where it should be placed.
As shown in fig. 5, an execution result output by the terminal device is exemplarily shown. The left diagram shows the color of the target movable member 30 and the position of the target answer position 40 in the second area 20 when the logic training task is correctly performed; the right diagram shows the color of the target movable member 30, the position of the target movable member and the position of the target answer position 40 as recognized by the terminal device. The target answer position 40 is schematically shown as a dashed frame.
As shown in fig. 6, in one embodiment, step S201 is preceded by steps S601 to S603 as follows:
step S601, when each picture book page in at least one picture book is placed in the first area and the positions of the K movable pieces within the 2×K placing positions are constantly changed, obtaining images of the logic board at different shooting angles until a first number of different images of each picture book page is obtained;
step S602, labeling the picture book page, the movable part and the answer position in each image of each picture book page respectively;
step S603, inputting the first number of labeled images of each picture book page into a neural network model for training until the neural network model converges, where the target values of the neural network model are the picture book page, the movable pieces and the answer positions labeled in each image of each picture book page.
In application, when one picture book page is placed in the first area and the K movable pieces are placed in K of the placing positions in advance, images of the logic board at different shooting angles can be obtained; then, when the positions of the K movable pieces on the placing positions are changed, images of the logic board at different shooting angles are obtained again. In this way, the placement of the movable pieces is continuously changed to obtain as many different images as possible corresponding to one picture book page, for example, 100 images. The more different images correspond to one picture book page, the more accurate the detection result of the trained neural network model. Theoretically, placing the K movable pieces in the 2×K placing positions yields

A(2K, K) = (2K)! / K!

different arrangements in total.
In application, after obtaining the plurality of images corresponding to one picture book page, a plurality of images corresponding to each of the other picture book pages is obtained in the same way. The picture book page, the movable pieces and the answer positions in each image are then labeled, and the neural network model is trained with the labeled images corresponding to each picture book page until it converges, giving the trained neural network model the ability to detect the labeled picture book page, movable pieces and answer positions in each image. The neural network model may be trained with all images corresponding to all pages of one picture book to obtain a neural network model for that picture book, i.e. one trained neural network model can detect the picture book page, movable pieces and answer positions in the logic board images corresponding to every page of that picture book. Alternatively, the neural network model may be trained with all images corresponding to all pages of at least two picture books to obtain a neural network model for those picture books, i.e. one trained neural network model can detect the picture book page, movable pieces and answer positions in the logic board images corresponding to every page of the at least two picture books.
In one embodiment, step S602 includes:
marking the picture page in each image corresponding to each picture page through the inscribed rectangle of the picture page in each image corresponding to each picture page;
marking the movable piece in each image corresponding to each picture page through the circumscribed rectangle of the movable piece in each image corresponding to each picture page;
marking the answer position in each image corresponding to each picture book page through the circumscribed rectangle of the answer position in each image corresponding to each picture book page; wherein the circumscribed rectangle of an answer position is the circumscribed rectangle that a movable piece has when placed in that answer position.
In application, the method for labeling the picture book page in each image is as follows: label the picture book page in the image through the inscribed rectangle of the picture book page in the image. The method for labeling a movable piece in each image is as follows: label the movable piece in the image through the circumscribed rectangle of the movable piece in the image. The method for labeling an answer position in each image is as follows: assuming that a movable piece is placed at the answer position, label the answer position in the image through the circumscribed rectangle of that movable piece. The position of each labeled part may be taken as the center coordinate of its label, the center coordinate being the coordinate of the geometric center. Other easy-to-detect labeling methods may also be used; the labeling method is not particularly limited in this embodiment.
FIG. 7 is a schematic diagram schematically illustrating an image after labeling in the above manner; wherein the labeling box is schematically shown as a dashed box.
As shown in FIG. 8, in one embodiment, step S203 includes the following steps S801-S803:
step S801, identifying the color of the movable part in the detection result;
step S802, when the number of movable pieces of the same color in the detection result is greater than 1, obtaining the color and the position of the movable piece with the highest color accuracy among the movable pieces of that color, to obtain the color of the target movable piece and the position of the target movable piece;
step S803, when the number of the movable pieces of the same color in the detection result is 1, obtaining the color and the position of the movable piece of the same color, and obtaining the color of the target movable piece and the position of the target movable piece.
In application, the terminal device may use an image recognition method to recognize the color of each movable piece in the detection result. In order to further screen the target movable piece out of the detected movable pieces, the accuracy of the color of each movable piece of the same color may be calculated, and the movable piece with the highest color accuracy taken as the target movable piece. The accuracy of a color can be measured by probability; for example, if two red movable pieces are detected, where the probability that one is red is A, the probability that the other is red is B, and A &lt; B, the movable piece with the higher probability of being red is taken as the target movable piece. When only one movable piece of a given color is in the detection result, that movable piece is the target movable piece.
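Purely as an illustrative sketch (the function name, detection tuple layout and confidence values are assumptions, not part of the embodiment), the per-color screening of steps S802–S803 might look like:

```python
def pick_targets(detections):
    """Keep, for each detected color, only the detection with the highest
    color confidence (the 'color accuracy' measured as a probability).

    detections: list of (color, confidence, position) tuples.
    Returns {color: position} for the selected target movable pieces.
    """
    best = {}
    for color, conf, pos in detections:
        # A color seen once is kept as-is; among duplicates, keep the
        # detection whose probability of being that color is highest.
        if color not in best or conf > best[color][0]:
            best[color] = (conf, pos)
    return {color: pos for color, (conf, pos) in best.items()}
```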
As shown in FIG. 9, in one embodiment, step S203 is followed by steps S901-S904 as follows:
s901, respectively obtaining the distance between the center coordinate of each target movable piece and the center coordinates of other target movable pieces;
s902, when the distance between the center coordinates of the two target movable pieces is smaller than a distance threshold, reserving one of the two target movable pieces;
s903, respectively calculating the distance between the color value of the center area of each reserved target movable member and the standard color values of the K movable members;
and S904, respectively obtaining the standard color value with the minimum distance from the color value of the central area of each reserved target movable member, and obtaining the color of each reserved target movable member after correction.
In application, the recognized target movable pieces may contain duplicate recognitions and color recognition errors. Duplicate recognition means that the same target movable piece is recognized both as one color and as other colors, at almost the same position. For this situation, the distance between the center coordinate of each target movable piece and the center coordinates of the other target movable pieces can be obtained; when the distance between the center coordinates of two target movable pieces is smaller than a distance threshold, the two can be determined to be duplicate detections, so only one of them is reserved and the other deleted. The same operation is then performed on the remaining target movable pieces until each group of duplicately recognized target movable pieces retains only one piece. For example, when the distance between the center coordinates of target movable piece a and target movable piece b is smaller than the distance threshold, only a is reserved; when the distance between the center coordinates of the reserved target movable piece a and target movable piece c is smaller than the distance threshold, only one of a or c is reserved. The center coordinate may specifically be the coordinate of the geometric center of the target movable piece, and the distance threshold may be set according to actual needs; for example, the distance threshold equals the diameter of a target movable piece. Through steps S901 and S902, the redundant pieces among the duplicately recognized target movable pieces can be deleted.
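A minimal sketch of the de-duplication in steps S901–S902, offered only as an illustration (function name and data layout are assumptions); it keeps the earlier-listed piece of any near-coincident pair, mirroring the "only a is reserved" example above:

```python
import math

def dedupe(pieces, dist_threshold):
    """Remove duplicate recognitions: discard any target movable piece whose
    center lies within dist_threshold of an already-reserved piece.

    pieces: list of dicts with a 'center' (x, y) entry.
    """
    kept = []
    for p in pieces:
        if all(math.dist(p["center"], q["center"]) >= dist_threshold
               for q in kept):
            kept.append(p)
    return kept
```

With a threshold equal to the piece diameter, two detections of the same physical piece can never both survive, since their centers must lie closer than one diameter apart.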
In the application, the color of the target movable member may also be recognized incorrectly, and therefore, the distance between the color value of the central area of each reserved target movable member and the standard color values of the K movable members needs to be further acquired, respectively. The size of the central area can be set to an arbitrary pixel size, for example, 10 × 10 pixels, according to actual needs. The size of the distance between the colors reflects the size of the color difference, and the size of the distance between the colors is positively correlated with the size of the color difference.
In one embodiment, step S903 is preceded by:
respectively obtaining YUV values of all pixel points of the K movable parts in different illumination environments;
respectively obtaining the average U value and the average V value of each movable member in each Y value interval of N sequentially adjacent Y value intervals according to the Y values of all pixel points of each movable member to obtain the standard U value and the standard V value of each movable member in each Y value interval, wherein N is more than or equal to 2;
step S903 includes:
and respectively obtaining the distance between the U value and the V value of the central area of each reserved target movable member and the standard U value and the standard V value of each movable member in the same Y value interval according to the Y value interval in which the Y value of each reserved target movable member is positioned.
In application, Y in a YUV value represents luminance (Luma), that is, the gray-scale value, while U and V represent chrominance (Chroma). The Y value is related to the brightness of the lighting environment: the Y values of the pixel points of each movable piece differ under different lighting environments and are the same under the same lighting environment.
In application, after the YUV values of all pixel points of each movable piece under different lighting environments are obtained, the average of the U values (the average U value) and the average of the V values (the average V value) of the pixel points falling in each Y value interval are obtained for each movable piece, the average U value being taken as the standard U value and the average V value as the standard V value. The Y value intervals may be divided according to actual needs; for example, with N = 10, the 10 sequentially adjacent Y value intervals are [40,80), [80,100), [100,120), [120,140), [140,160), [160,180), [180,200), [200,220), [220,245) and [245,255], which are gray-scale intervals. After the standard U value and standard V value of each movable piece in each Y value interval are obtained, the distance between the U and V values of the center area of each reserved target movable piece and the standard U and V values of each movable piece in the same Y value interval may be calculated according to the Y value interval in which the Y value of that center area lies.
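The per-interval averaging described above can be sketched as follows; this is an illustration only, with assumed function name and data shapes:

```python
def standard_uv_by_y_interval(pixels, y_bounds):
    """Compute the standard (average) U and V values of one movable piece
    per Y value interval.

    pixels: list of (Y, U, V) triples sampled under various lighting.
    y_bounds: ascending interval edges, e.g. [40, 80, 100, ...]; interval i
    is [y_bounds[i], y_bounds[i+1]).
    Returns {interval_index: (standard U, standard V)}.
    """
    buckets = {}
    for y, u, v in pixels:
        for i in range(len(y_bounds) - 1):
            if y_bounds[i] <= y < y_bounds[i + 1]:
                buckets.setdefault(i, []).append((u, v))
                break
    return {i: (sum(u for u, _ in vals) / len(vals),   # standard U value
                sum(v for _, v in vals) / len(vals))   # standard V value
            for i, vals in buckets.items()}
```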
In one embodiment, the distance in step S903 is calculated by the formula:
dis = a × abs(U − U_std) + b × abs(V − V_std);

a + b = 1;

where dis represents the distance between the color value of the center area of any one reserved target movable piece and the standard color value of any one of the K movable pieces, a and b represent weight coefficients, abs() represents the absolute value function, U represents the U value of the center area of the reserved target movable piece, V represents the V value of the center area of the reserved target movable piece, U_std represents the standard U value of the movable piece in the Y value interval in which the Y value of the center area of the reserved target movable piece lies, and V_std represents the standard V value of the movable piece in that same Y value interval.
In application, the distance from the color value of the center area of each reserved target movable piece to each standard color value is determined by the above method; the color whose standard value is closest then gives the corrected color of each reserved target movable piece.
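A compact sketch of this nearest-standard-color correction, for illustration only (the function name, the default weights a = b = 0.5 and the dictionary layout are assumptions):

```python
def corrected_color(u, v, standards, a=0.5, b=0.5):
    """Return the color whose standard (U_std, V_std) pair, taken from the
    matching Y value interval, is nearest under the weighted distance
    dis = a*|U - U_std| + b*|V - V_std|, with a + b = 1.

    standards: {color_name: (U_std, V_std)} for the K movable pieces.
    """
    assert abs(a + b - 1.0) < 1e-9  # weights must sum to 1
    return min(standards,
               key=lambda c: a * abs(u - standards[c][0])
                           + b * abs(v - standards[c][1]))
```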
In one embodiment, after step S904, comprising:
when the corrected colors of all the target movable parts comprise similar colors, sorting all the colors in the similar colors according to the standard U value of each color in the similar colors to obtain a target sorting result;
when the target sorting result is not matched with an ideal sorting result, re-determining the color of each target movable element corresponding to the similar color according to the ideal sorting result; and the ideal ordering result is obtained by ordering all the colors in the similar colors according to the magnitude of the ideal U value of each color in the similar colors.
In application, after the color of each reserved target movable piece is obtained, color recognition errors may still occur. Many experiments show that color recognition errors mainly occur between similar colors; for example, purple, red and orange are easily mistaken for one another. For three target movable pieces whose colors are recognized as purple, red and orange respectively, the standard U values of the three can be compared and the three pieces sorted by standard U value to obtain a target sorting result. This is then compared with an ideal sorting result obtained by sorting purple, red and orange according to their ideal U values, the ideal sorting result being purple &gt; red &gt; orange. If the target sorting result differs from the ideal sorting result, the two do not match and the colors of the three target movable pieces need to be re-determined. For example, if the target sorting result of the three pieces recognized as purple, red and orange is red &gt; orange &gt; purple, then the piece recognized as red is actually purple, the piece recognized as orange is actually red, and the piece recognized as purple is actually orange.
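The re-labeling rule in the purple/red/orange example can be sketched as follows; this is an illustration only, with assumed names and data shapes: the piece ranked k-th by measured standard U value receives the k-th color of the ideal ordering:

```python
def reassign_similar_colors(pieces, standard_u, ideal_order):
    """Re-determine the colors of similarly-colored target movable pieces.

    pieces: {recognized_color: piece_id} for the group of similar colors.
    standard_u: {recognized_color: measured standard U value}.
    ideal_order: the similar colors sorted by ideal U value, descending.
    Returns {corrected_color: piece_id}.
    """
    # Rank the pieces by their measured standard U value, descending.
    measured_order = sorted(pieces, key=lambda c: standard_u[c], reverse=True)
    # The k-th ranked piece is re-labeled with the k-th ideal color.
    return {ideal_order[k]: pieces[c] for k, c in enumerate(measured_order)}
```

Applied to the example above (measured ranking red &gt; orange &gt; purple against ideal ranking purple &gt; red &gt; orange), the piece recognized as red is re-labeled purple, the orange piece red, and the purple piece orange.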
The logic board recognition method provided by the embodiment of the present application replaces manual checking with image recognition and a machine learning method based on a neural network model, using artificial intelligence technology to recognize whether a user has correctly executed the logic training task of the picture book page placed on the logic board. It can effectively improve recognition efficiency and save manpower, and is particularly suitable for training the logical thinking ability of young children, or of people with cognitive impairments, when no guardian is present.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The embodiment of the present application also provides a logic board recognition apparatus for executing the steps in the above logic board recognition method. The logic board recognition apparatus may be a virtual apparatus (virtual appliance) in the terminal device, executed by a processor of the terminal device, or may be the terminal device itself.
As shown in fig. 10, the logic board recognition apparatus 10 according to the embodiment of the present application includes:
the image detection module 101 is configured to obtain an image of the logic board, input the image into a trained neural network model to detect the picture book page, the movable pieces and the answer positions in the image, and obtain a detection result output by the trained neural network model; wherein an answer position is a placing position where a movable piece is located when the user correctly executes the logic training task in the picture book page;
the first identification module 102 is configured to identify a target sketch page in the detection result;
a second identification module 103, configured to identify, when the target picture book page is successfully identified, a target movable piece, the color of the target movable piece and the position of the target movable piece in the detection result;
and a result output module 104, configured to obtain and output an execution result of the logic training task executed by the user in the picture book page according to the answer positions in the detection result, the color of the target movable piece and the position of the target movable piece.
In one embodiment, the logic board recognition device further includes a starting module, configured to start a voice broadcast function of the terminal device.
In one embodiment, the starting module is further configured to start a logic board identification function of the terminal device.
In one embodiment, the logic board identification apparatus further comprises:
the prompt module is used for outputting first prompt information for indicating that the detection of the chart page fails when the detection result does not comprise the chart page;
the timing module is used for returning to the image detection module after waiting for the first preset time;
a communication module to:
when the number of detection failures of the target picture book page reaches a first preset number of times, uploading the image to a server so as to detect the target picture book page in the image through the server;
acquiring a detection result of the server;
and the prompt module is further used for outputting the first prompt message indicating that detection of the picture book page failed, and returning to the image detection module, when the detection result of the server does not include the picture book page.
In one embodiment, the prompt module is further configured to output a second prompt message indicating that the detection of the target movable element fails when the detection result does not include the movable element;
the timing module is also used for returning to the image detection module after waiting for a second preset time;
the logic board recognition apparatus further includes:
the standby module is used for entering a standby mode when the detection failure times of the target movable part reach a second preset times;
and the timing module is also used for returning to the image detection module after waiting for a third preset time.
In one embodiment, the logic board recognition apparatus further comprises a neural network training module for:
when each picture page in at least one picture book is placed in the first area and the positions of the K movable pieces placed in the 2 xK placing positions are changed constantly, obtaining images of the logic board at different shooting angles until a first number of different images of each picture book page are obtained;
labeling the picture book page, the movable element and the answer position in each image of each picture book page respectively;
and respectively inputting the first number of images of each labeled drawing page into a neural network model for training until the neural network model converges, wherein the target value of the neural network model is the drawing page, the movable part and the answer position in each image of each labeled drawing page.
In one embodiment, the logic board identification apparatus further comprises a first color correction module for:
respectively acquiring the distance between the center coordinate of each target movable member and the center coordinates of other target movable members;
when the distance between the center coordinates of the two target movable members is smaller than a distance threshold, retaining one of the two target movable members;
respectively calculating the distance between the color value of the central area of each reserved target movable member and the standard color values of the K movable members;
and respectively acquiring the standard color value with the minimum distance from the color value of the central area of each reserved target movable member to obtain the color of each reserved target movable member after correction.
In one embodiment, the logic board identification apparatus further comprises a second color correction module for:
when the corrected colors of all the target movable parts comprise similar colors, sorting all the colors in the similar colors according to the standard U value of each color in the similar colors to obtain a target sorting result;
when the target sorting result is not matched with an ideal sorting result, re-determining the color of each target movable element corresponding to the similar color according to the ideal sorting result; and the ideal ordering result is obtained by ordering all the colors in the similar colors according to the magnitude of the ideal U value of each color in the similar colors.
In application, each module in the logic board recognition apparatus may be a software program module, may also be implemented by different logic circuits integrated in a processor, and may also be implemented by a plurality of distributed processors.
As shown in fig. 11, an embodiment of the present application further provides a terminal device 11, including: a camera 110, at least one processor 111 (only one processor is shown in fig. 11), a memory 112, and a computer program 113 stored in the memory 112 and executable on the at least one processor 111, wherein the processor 111 implements the steps in any of the above-mentioned embodiments of the logic board identification method when executing the computer program 113.
In application, the terminal device may be a computing device such as a robot, a desktop computer, a notebook, a palm computer or a cloud server. The terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that fig. 11 is merely an example of a terminal device and does not constitute a limitation on the terminal device, which may include more or fewer components than those shown, combine some components, or use different components, such as an input-output device, a network access device, etc.
In application, the processor may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
In some embodiments, the storage may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory may also be an external storage device of the terminal device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal device. Further, the memory may also include both an internal storage unit of the terminal device and an external storage device. The memory is used for storing an operating system, an application program, a Boot Loader (Boot Loader), data, and other programs, such as program codes of computer programs. The memory may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/modules, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and reference may be made to the part of the embodiment of the method specifically, and details are not described here.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module, and the integrated module may be implemented in a form of hardware, or in a form of software functional module. In addition, specific names of the functional modules are only used for distinguishing one functional module from another, and are not used for limiting the protection scope of the application. The specific working process of the modules in the system may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for identifying a logic board according to any of the above embodiments is implemented.
The embodiment of the present application provides a computer program product, which, when running on a terminal device, enables the terminal device to execute the logic board identification method according to any one of the above embodiments.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or apparatus capable of carrying computer program code to a terminal device, recording medium, computer Memory, Read-Only Memory (ROM), Random-Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus and the terminal device are merely illustrative, and for example, the division of the modules is only one logical division, and there may be other divisions when the actual implementation is performed, for example, a plurality of modules or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A logic board identification method, wherein the logic board comprises a first area, a second area and K movable members that are identical in shape and different in color, the first area is used for placing a picture book page, the second area is provided with 2×K placing positions, and each placing position is used for placing one movable member, the method comprising:
acquiring an image of the logic board, inputting the image into a trained neural network model to detect the picture book page, the movable members and the answer positions in the image, and acquiring a detection result output by the trained neural network model; wherein an answer position is the placing position in which a movable member is located when a user correctly executes a logic training task in the picture book page;
identifying a target picture book page in the detection result;
when the target picture book page is successfully identified, identifying a target movable member, the color of the target movable member and the position of the target movable member in the detection result;
and acquiring an execution result of the logic training task executed by the user in the target picture book page according to the answer position in the detection result, the color of the target movable member and the position of the target movable member, and outputting the execution result.
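As a rough illustration of the claim-1 pipeline, the following Python sketch shows one way a detection result could be turned into an execution result. It is not the patented implementation; the `Detection` fields, the `detector` callable and the centre-in-box matching rule are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # "page", "piece", or "answer_slot" (assumed label set)
    color: str   # piece color; empty for non-pieces
    box: tuple   # (x1, y1, x2, y2) bounding box

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def inside(pt, box):
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def recognize_logic_board(image, detector, answer_key):
    """Run the trained detector, read off which piece sits on each
    answer slot, and compare against the known correct colors."""
    detections = detector(image)
    if not any(d.label == "page" for d in detections):
        return None  # target picture book page not identified
    pieces = [d for d in detections if d.label == "piece"]
    placed = []
    for slot in (d for d in detections if d.label == "answer_slot"):
        on_slot = [p.color for p in pieces if inside(center(p.box), slot.box)]
        placed.append(on_slot[0] if on_slot else None)
    return placed == answer_key  # True only if the task was done correctly
```

A `True` return stands in for outputting a "task executed correctly" result; a real system would render feedback to the user instead.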
2. The logic board identification method of claim 1, wherein before the acquiring of the image of the logic board, the method comprises:
when each picture book page of at least one picture book is placed in the first area and the positions of the K movable members in the 2×K placing positions are continuously changed, acquiring images of the logic board at different shooting angles until a first number of different images of each picture book page are obtained;
labeling the picture book page, the movable members and the answer positions in each image of each picture book page respectively;
inputting the labeled first number of images of each picture book page into a neural network model for training until the neural network model converges, wherein the target values of the neural network model are the labeled picture book page, movable members and answer positions in each image of each picture book page.
3. The logic board identification method of claim 2, wherein the labeling of the picture book page, the movable members and the answer positions in each image corresponding to each picture book page respectively comprises:
labeling the picture book page in each image corresponding to each picture book page through the inscribed rectangle of the picture book page in that image;
labeling the movable member in each image corresponding to each picture book page through the circumscribed rectangle of the movable member in that image;
labeling the answer position in each image corresponding to each picture book page through the circumscribed rectangle of the answer position in that image; wherein the circumscribed rectangle of an answer position is the circumscribed rectangle of the movable member placed in that answer position.
4. The logic board identification method according to claim 1, wherein the identifying of the color of the target movable member and the position of the target movable member in the detection result comprises:
identifying the color of the movable member in the detection result;
when the number of movable members with the same color in the detection result is greater than 1, acquiring the color and the position of the movable member with the highest color accuracy among the movable members with that color, to obtain the color of the target movable member and the position of the target movable member;
and when the number of movable members with the same color in the detection result is 1, acquiring the color and the position of that movable member, to obtain the color of the target movable member and the position of the target movable member.
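A minimal sketch of this duplicate-color handling, assuming the detector attaches a confidence score to each detection (the `conf`, `color` and `pos` field names are assumptions; the claim calls the score "color accuracy"):

```python
def pick_target_members(detections):
    """For each color, keep only the detection with the highest confidence;
    a color detected exactly once is kept as-is."""
    best = {}
    for d in detections:  # d: {"color": str, "conf": float, "pos": tuple}
        c = d["color"]
        if c not in best or d["conf"] > best[c]["conf"]:
            best[c] = d
    return [(d["color"], d["pos"]) for d in best.values()]
```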
5. The logic board identification method of claim 1, wherein the acquiring and outputting of the execution result of the logic training task executed by the user in the picture book page according to the answer position in the detection result, the color of the target movable member and the position of the target movable member comprises:
acquiring, according to the answer position in the detection result, the color of the target movable member and the position of the target movable member, the color of the target movable member placed at the answer position, to obtain an answer result;
and comparing the answer result with a known correct answer result to obtain the execution result of the logic training task executed by the user in the target picture book page, and outputting the execution result; wherein the correct answer result comprises the color of the movable member placed at the answer position when the user correctly executes the logic training task in the target picture book page.
6. The logic board identification method according to any one of claims 1 to 5, wherein after identifying the target movable member, the color of the target movable member, and the position of the target movable member in the detection result, the method comprises:
respectively acquiring the distance between the center coordinate of each target movable member and the center coordinates of other target movable members;
when the distance between the center coordinates of two target movable members is smaller than a distance threshold, retaining one of the two target movable members;
respectively acquiring the distance between the color value of the central area of each retained target movable member and the standard color values of the K movable members;
and respectively acquiring the standard color value with the minimum distance from the color value of the central area of each retained target movable member, to obtain the corrected color of each retained target movable member.
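The two post-processing steps of claim 6 can be sketched as follows; the distance threshold and the standard (U, V) values are made-up example numbers, not values from the patent:

```python
import math

DIST_THRESHOLD = 15.0                                  # assumed pixel threshold
STANDARD_UV = {"red": (90, 240), "blue": (240, 110)}   # assumed standard colors

def deduplicate(members):
    """Of any two detections whose centre coordinates are closer than the
    threshold, retain only one."""
    retained = []
    for m in members:
        if all(math.dist(m["center"], r["center"]) >= DIST_THRESHOLD
               for r in retained):
            retained.append(m)
    return retained

def corrected_color(member):
    """Assign the color whose standard (U, V) value is nearest to the
    color value of the member's central area."""
    return min(STANDARD_UV,
               key=lambda c: math.dist(member["uv"], STANDARD_UV[c]))
```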
7. The logic board identification method of claim 6, wherein before the respectively acquiring of the distances between the color value of the central area of each retained target movable member and the standard color values of the K movable members, the method comprises:
respectively acquiring the YUV values of all pixel points of the K movable members in different illumination environments;
respectively acquiring, according to the Y values of all pixel points of each movable member, the average U value and the average V value of each movable member in each of N sequentially adjacent Y-value intervals, to obtain the standard U value and the standard V value of each movable member in each Y-value interval, wherein N ≥ 2;
and the respectively acquiring of the distances between the color value of the central area of each retained target movable member and the standard color values of the K movable members comprises:
respectively acquiring, according to the Y-value interval in which the Y value of each retained target movable member is located, the distance between the U value and V value of the central area of each retained target movable member and the standard U value and standard V value of each movable member in the same Y-value interval.
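A sketch of claim 7's per-interval standard colors, assuming 8-bit Y values split into N equal intervals (the equal split and N = 4 are assumptions; the patent only requires N ≥ 2 sequentially adjacent intervals):

```python
N = 4  # assumed number of sequentially adjacent Y-value intervals

def y_interval(y):
    """Map an 8-bit luma value to one of N equal intervals."""
    return min(int(y * N / 256), N - 1)

def build_standard_table(samples):
    """samples: (Y, U, V) pixels of one movable member gathered under
    different illumination. Returns {interval: (average U, average V)},
    i.e. the member's standard U and V values per Y-value interval."""
    buckets = {}
    for y, u, v in samples:
        buckets.setdefault(y_interval(y), []).append((u, v))
    return {i: (sum(u for u, _ in uv) / len(uv),
                sum(v for _, v in uv) / len(uv))
            for i, uv in buckets.items()}

def uv_distance(member_yuv, table):
    """Distance from the member's central-area (U, V) to the standard
    (U, V) of the same Y-value interval."""
    y, u, v = member_yuv
    su, sv = table[y_interval(y)]
    return ((u - su) ** 2 + (v - sv) ** 2) ** 0.5
```

Bucketing by luma is what makes the comparison robust to illumination: a piece seen in shadow is compared against standard colors measured at similar brightness.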
8. The logic board identification method of claim 6, wherein the respectively acquiring of the standard color value with the minimum distance from the color value of the central area of each retained target movable member, to obtain the corrected color of each retained target movable member, further comprises:
when the corrected colors of all the target movable members include similar colors, sorting all the colors among the similar colors according to the magnitude of the standard U value of each of the similar colors, to obtain a target sorting result;
and when the target sorting result does not match an ideal sorting result, re-determining the color of each target movable member corresponding to the similar colors according to the ideal sorting result; wherein the ideal sorting result is obtained by sorting all the colors among the similar colors according to the magnitude of the ideal U value of each of the similar colors.
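Claim 8's similar-color disambiguation could look like the following sketch; the standard and ideal U values used in the example are invented numbers:

```python
def remap_similar_colors(assigned, similar, standard_u, ideal_u):
    """Sort the confusable colors by measured standard U value and by ideal
    U value; if the two orderings disagree, remap each assigned color from
    the measured ordering onto the ideal ordering."""
    target_order = sorted(similar, key=lambda c: standard_u[c])
    ideal_order = sorted(similar, key=lambda c: ideal_u[c])
    if target_order == ideal_order:
        return assigned                      # orderings match; nothing to fix
    mapping = dict(zip(target_order, ideal_order))
    return [mapping.get(c, c) for c in assigned]
```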
9. A logic board identification device, wherein the logic board comprises a first area, a second area and K movable members that are identical in shape and different in color, the first area is used for placing a picture book page, the second area is provided with 2×K placing positions, and each placing position is used for placing one movable member, the device comprising:
an image detection module, used for acquiring the image of the logic board, inputting the image into the trained neural network model to detect the picture book page, the movable members and the answer positions in the image, and acquiring the detection result output by the trained neural network model; wherein an answer position is the placing position in which a movable member is located when a user correctly executes a logic training task in the picture book page;
the first identification module is used for identifying a target picture book page in the detection result;
a second identification module, used for identifying a target movable member, the color of the target movable member and the position of the target movable member in the detection result when the target picture book page is successfully identified;
and a result output module, used for acquiring and outputting the execution result of the logic training task executed by the user in the target picture book page according to the answer position in the detection result, the color of the target movable member and the position of the target movable member.
10. A terminal device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the terminal device further comprises or is in communication with a camera, and the processor implements the steps of the logic board identification method according to any one of claims 1 to 8 when executing the computer program.
CN202011046745.3A 2020-09-29 2020-09-29 Logic board identification method and device and terminal equipment Active CN112201116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011046745.3A CN112201116B (en) 2020-09-29 2020-09-29 Logic board identification method and device and terminal equipment


Publications (2)

Publication Number Publication Date
CN112201116A CN112201116A (en) 2021-01-08
CN112201116B true CN112201116B (en) 2022-08-05

Family

ID=74007063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011046745.3A Active CN112201116B (en) 2020-09-29 2020-09-29 Logic board identification method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN112201116B (en)

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3346799B2 (en) * 1992-08-24 2002-11-18 株式会社日立製作所 Sign language interpreter
JP3289304B2 (en) * 1992-03-10 2002-06-04 株式会社日立製作所 Sign language conversion apparatus and method
CN204360637U (en) * 2015-01-13 2015-05-27 晋鼎明 Mind map box
US20180204108A1 (en) * 2017-01-18 2018-07-19 Microsoft Technology Licensing, Llc Automated activity-time training
US10657838B2 (en) * 2017-03-15 2020-05-19 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN108073888A (en) * 2017-08-07 2018-05-25 中国科学院深圳先进技术研究院 A kind of teaching auxiliary and the teaching auxiliary system using this method
CN108009475A (en) * 2017-11-03 2018-05-08 东软集团股份有限公司 Driving behavior analysis method, apparatus, computer-readable recording medium and electronic equipment
CN108665769B (en) * 2018-05-11 2021-04-06 深圳市鹰硕技术有限公司 Network teaching method and device based on convolutional neural network
CN109044351A (en) * 2018-08-03 2018-12-21 周文芳 Logical thinking training system and method based on EEG feedback
CN109118884B (en) * 2018-09-12 2020-05-08 武仪 Teaching device of robot experiment course
CN109166386A (en) * 2018-10-25 2019-01-08 重庆鲁班机器人技术研究院有限公司 Children's logical thinking supplemental training method, apparatus and robot
CN109522835A (en) * 2018-11-13 2019-03-26 北京光年无限科技有限公司 Children's book based on intelligent robot is read and exchange method and system
CN111310775B (en) * 2018-12-11 2023-08-25 Tcl科技集团股份有限公司 Data training method, device, terminal equipment and computer readable storage medium
CN210244762U (en) * 2019-05-09 2020-04-03 北京育益教育科技有限公司 Logical thinking and number teaching toy
CN110689007B (en) * 2019-09-16 2022-04-15 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
US10607084B1 (en) * 2019-10-24 2020-03-31 Capital One Services, Llc Visual inspection support using extended reality
CN111222397B (en) * 2019-10-25 2023-10-13 深圳市优必选科技股份有限公司 Drawing recognition method and device and robot
CN110852131B (en) * 2019-12-10 2024-03-26 艾小本科技(武汉)有限公司 Examination card information acquisition method, system and terminal
CN111191067A (en) * 2019-12-25 2020-05-22 深圳市优必选科技股份有限公司 Picture book identification method, terminal device and computer readable storage medium
CN111191558B (en) * 2019-12-25 2024-02-02 深圳市优必选科技股份有限公司 Robot and face recognition teaching method and storage medium thereof
CN111680480A (en) * 2020-04-24 2020-09-18 平安国际智慧城市科技股份有限公司 Template-based job approval method and device, computer equipment and storage medium
CN111695453B (en) * 2020-05-27 2024-02-09 深圳市优必选科技股份有限公司 Drawing recognition method and device and robot

Also Published As

Publication number Publication date
CN112201116A (en) 2021-01-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231204

Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176

Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.
