CN114797084A - Graphic display method, apparatus, device and medium

Graphic display method, apparatus, device and medium

Info

Publication number
CN114797084A
CN114797084A
Authority
CN
China
Prior art keywords
target
real
detected
real object
tile
Prior art date
Legal status
Pending
Application number
CN202110091009.8A
Other languages
Chinese (zh)
Inventor
郭冠军
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202110091009.8A
Priority to PCT/CN2021/135688 (published as WO2022156389A1)
Publication of CN114797084A

Classifications

    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/428 Processing input control signals of video game devices by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F 9/10 Two-dimensional jig-saw puzzles
    • G06T 11/60 Editing figures and text; Combining figures or text
    • A63F 2300/1087 Features of games using an electronically generated display, characterised by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera

Abstract

The present disclosure relates to a graphic display method, apparatus, device, and medium. The graphic display method comprises the following steps: displaying a target graphic; acquiring a real-time image, wherein the real-time image is an image including tile real objects to be detected; and, in a case where it is determined that a correctly placed target tile real object exists among the tile real objects to be detected, displaying a target filling pattern in a target shape region in the target graphic, wherein the target shape region is a relative placement region of the target tile real object in the target graphic. According to the embodiments of the present disclosure, the user can be prompted in real time, while assembling the puzzle, that a target tile real object has been placed correctly, thereby realizing real-time interaction with the user.

Description

Graphic display method, apparatus, device and medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a medium for displaying a graphic.
Background
At present, puzzles are often used as teaching aids in various educational games. After entering a puzzle game, the user is presented with a puzzle question and assembles the puzzle according to that question. However, in current puzzle games, answer matching suffers from various problems when the user lays out the pieces in a non-standard way, and the game has no interactive function during play, so the playability and interactivity of the puzzle game are low and the user experience is reduced.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present disclosure provides a graphic display method, apparatus, device, and medium.
In a first aspect, the present disclosure provides a method for displaying a graphic, comprising:
displaying a target graph;
acquiring a real-time image, wherein the real-time image is an image including tile real objects to be detected;
and, in a case where it is determined that a correctly placed target tile real object exists among the tile real objects to be detected, displaying a target filling pattern in a target shape region in the target graphic, wherein the target shape region is a relative placement region of the target tile real object in the target graphic.
In a second aspect, the present disclosure provides a graphic display device comprising:
a first display unit configured to display a target graphic;
an image acquisition unit configured to acquire a real-time image, wherein the real-time image is an image including tile real objects to be detected; and
a second display unit configured to display a target filling pattern in a target shape region in the target graphic in a case where a correctly placed target tile real object exists among the tile real objects to be detected, wherein the target shape region is a relative placement region of the target tile real object in the target graphic.
In a third aspect, the present disclosure provides a graphic display device comprising:
a processor;
a memory for storing executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the graphic display method according to the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the graphical display method of the first aspect.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, causes the processor to carry out the graphical display method of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has at least the following advantages:
the image display method, the device, the equipment and the medium can display a target image, and acquire a real-time image comprising the to-be-detected jigsaw real object in the process of jigsaw puzzle by a user, and further display a target filling pattern in a target shape area in the target image under the condition that the to-be-detected jigsaw real object has a correctly-placed target jigsaw real object.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a diagram of a graphical display in the related art;
fig. 2 is a schematic flow chart of a graphic display method according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a game interface provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a center of gravity distance provided by an embodiment of the present disclosure;
FIG. 5 is a schematic view of another game interface provided by embodiments of the present disclosure;
fig. 6 is a schematic diagram of an object angle parameter according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of matching puzzle images according to an embodiment of the present disclosure;
FIG. 8 is a schematic illustration of a geometric parameter provided by an embodiment of the present disclosure;
FIG. 9 is a schematic view of yet another game interface provided by an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a graphic display device according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a graphic display device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a view showing a scene of a graphic display in the related art.
As shown in fig. 1, after a user enters a game interface 102 of a puzzle game through an electronic device 101, a designated graphic 103 is displayed in the game interface 102; the graphic 103 is a puzzle question, and the user can assemble the puzzle pieces 104 against the graphic 103.
After finishing the puzzle, the user can enter an answer display interface of the puzzle game through the electronic device 101, in which a pre-designed puzzle answer for the puzzle question is displayed. The user can compare the assembled pattern with the puzzle answers, and if the assembled pattern is the same as any one of the puzzle answers, the puzzle can be determined to be correct, thereby realizing self-checking of the puzzle result.
It can be seen that, in current puzzle games, various problems may arise because the user lays out the pieces in a non-standard way, and the puzzle has no interactive function, so the playability and interactivity of the puzzle game are low and the user experience is reduced.
In order to solve the above problem, embodiments of the present disclosure provide a method, an apparatus, a device, and a medium for displaying a graph, which can interact with a user in real time.
Next, a description will be given of a graphic display method provided in an embodiment of the present disclosure.
In some embodiments of the present disclosure, a graphical display method may be performed by an electronic device. The electronic device may include a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, an intelligent home device, and other devices having a communication function, and may also be a virtual machine or a simulator-simulated device.
Fig. 2 is a flowchart illustrating a graphic display method according to an embodiment of the present disclosure.
As shown in fig. 2, the graphic display method may include the following steps.
And S210, displaying the target graph.
In embodiments of the present disclosure, the electronic device may display a target graphic, against which the user can assemble the puzzle using the physical puzzle tiles.
The puzzle may be of any type; for example, the target puzzle may be a four-piece puzzle, a five-piece puzzle, a seven-piece tangram, a thirteen-piece puzzle, or the like, which is not limited herein.
In the disclosed embodiment, the target graphic can be displayed at any position, which is not limited herein.
And S220, acquiring a real-time image, wherein the real-time image is an image comprising the segment real object to be detected.
While the user assembles the puzzle, a camera of the electronic device, such as a rear camera, can capture in real time the real-time image including the tile real objects to be detected, so that the electronic device acquires the real-time image synchronously in real time.
In some embodiments, the electronic device may acquire the real-time image without displaying it.
In other embodiments, the electronic device may display the real-time image in real-time in synchronization with the target graphic after acquiring the real-time image.
Taking a puzzle game as an example, after the user enters the game interface of the puzzle game through the electronic device, a designated target graphic is displayed in the game interface; the target graphic is the puzzle question, and the user can assemble the puzzle tiles against the target graphic. While the user assembles the puzzle, the camera of the electronic device captures in real time the real-time image including the tile real objects to be detected, and the real-time image can then be displayed in the game interface in real time, so that the electronic device simultaneously displays the target graphic and the real-time image in the game interface.
Optionally, the target graph and the real-time image may be displayed on the electronic device according to a preset display mode. The preset display mode may be preset according to needs, and is not limited herein.
In some embodiments, the preset display mode may be full-screen display of the real-time image, and the target graphic is displayed at any position on the real-time image. For example, the display screen of the electronic device may display the real-time image in full screen, and the target graphic may be displayed superimposed on a corner of the real-time image.
Continuing to take a tangram game as an example, the game interface can display a shooting preview interface in a full screen mode, a real-time image collected by a camera of the electronic equipment in a working state can be displayed in the shooting preview interface in real time, and the target graph can be displayed in a corner of the shooting preview interface in an overlapped mode.
Fig. 3 shows a schematic diagram of a game interface provided by an embodiment of the present disclosure.
As shown in fig. 3, the electronic device 301 may display a game interface 302 of a tangram game, the real-time image 303 may be displayed in a full screen mode in the game interface 302, and the target graphic 304 may be displayed in a superimposed mode in the lower left corner of the real-time image 303.
In other embodiments, the preset display mode may also be split-screen display of the target graph and the real-time image. Specifically, the display screen of the electronic device may be divided into two display areas, one display area having the target graphic displayed therein and one display area having the real-time image displayed therein.
Continuing to take the example of a puzzle game, the game interface can be divided into two display areas, one display area can display the target graph, the other display area can display the shooting preview interface, and the real-time image collected by the camera of the electronic device in the working state can be displayed in the shooting preview interface in real time.
In still other embodiments, the preset display mode may be other display modes, and the disclosure does not limit the specific display mode.
And S230, under the condition that the target block real object with correct placement is determined to exist in the block real object to be detected, displaying a target filling pattern in a target shape area in the target graph, wherein the target shape area is a relative placement area of the target block real object in the target graph.
In the embodiment of the present disclosure, the electronic device may detect in real time whether a correctly placed target tile real object exists among the tile real objects to be detected. If it is determined that such a target tile real object exists, the relative placement region of the target tile real object in the target graphic may be taken as a target shape region in the target graphic, and a target filling pattern may be displayed in that target shape region to light up the relative placement region of the target tile real object in the target graphic.
Alternatively, the target filling pattern may be a filling pattern corresponding to the target shape region.
In some embodiments, the filling pattern corresponding to each shape region in the target graphic is the same, i.e. the target filling pattern is a preset fixed filling pattern.
Alternatively, the filling pattern may include at least one of a solid color pattern and a floral pattern.
With continued reference to FIG. 3, the electronic device 301 can display a first shape region 309, a second shape region 310, a third shape region 311, and a fourth shape region 312 in the target graphic 304 as gray, respectively, upon detecting that the first tile entity 305, the second tile entity 306, the third tile entity 307, and the fourth tile entity 308 are correctly positioned. Wherein the first shape region 309 is the relative placement region of the first tile entity 305 in the target graphic 304, the second shape region 310 is the relative placement region of the second tile entity 306 in the target graphic 304, the third shape region 311 is the relative placement region of the third tile entity 307 in the target graphic 304, and the fourth shape region 312 is the relative placement region of the fourth tile entity 308 in the target graphic 304.
Therefore, in the embodiment of the disclosure, the effect of lighting the relative placing area of the target tile real object in the target graph can be achieved by displaying the preset fixed filling pattern in the target shape area.
In other embodiments, the filling pattern corresponding to each shape region in the target graph may be different, one shape region may correspond to one filling pattern, and the filling pattern corresponding to each shape region may be determined according to the real object pattern of the tile real object corresponding to the shape region. In one example, the fill pattern for a shape region can be a solid color pattern having the physical color of the tile entity to which the shape region corresponds. In another example, the fill pattern corresponding to a shape region can be the physical pattern of the tile entity to which the shape region corresponds.
Therefore, in the embodiment of the disclosure, the filling pattern specified by the target shape area can be displayed in the target shape area, so that the lighting effect of the target graph is further improved while the effect of lighting the relative placing area of the target tile real object in the target graph is achieved.
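As a purely illustrative sketch (the disclosure does not prescribe any particular implementation or API), the two fill variants described above can be captured by a small mapping from shape regions to fill patterns, with a fixed fallback fill for the single-pattern variant; every name below, including the drawing call fill_region, is a placeholder assumption.

```python
# Hypothetical per-region fill patterns, e.g. derived from each tile's physical
# color or pattern (the second variant described above).
REGION_FILLS = {
    "first_shape_region":  {"kind": "solid", "color": "#d94f4f"},
    "second_shape_region": {"kind": "solid", "color": "#4f8ad9"},
    # ... one entry per shape region of the target graphic
}

# Fixed fill used when every region is lit the same way (the first variant).
DEFAULT_FILL = {"kind": "solid", "color": "gray"}

def light_region(canvas, region_id):
    """Display the target filling pattern in the target shape region once the
    corresponding target tile real object is detected as correctly placed."""
    fill = REGION_FILLS.get(region_id, DEFAULT_FILL)
    canvas.fill_region(region_id, fill)  # fill_region is a hypothetical drawing call
```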
In the embodiment of the present disclosure, a target graphic can be displayed; while the user assembles the puzzle, a real-time image including the tile real objects to be detected is acquired; and, in a case where a correctly placed target tile real object exists among the tile real objects to be detected, a target filling pattern is displayed in a target shape region of the target graphic, thereby prompting the user in real time that the target tile real object has been placed correctly and realizing real-time interaction with the user.
In one embodiment of the present disclosure, the real-time image may further be an image comprising at least two tile real objects.
In some embodiments, each of the at least two tile real objects can be treated as a tile real object to be detected.
In these embodiments, the electronic device detects the correctly placed target tile real object among all the tile real objects displayed in the real-time image.
In other embodiments, a part of the at least two tile real objects can be treated as tile real objects to be detected, and the other part as tile real objects not to be detected. A tile real object to be detected is one on which target tile detection is performed, and a tile real object not to be detected is one on which target tile detection is not performed.
In these embodiments, the segment real objects to be detected can be segment real objects determined according to the gravity center distance between the segment real objects.
Fig. 4 shows a schematic diagram of a center of gravity distance provided by an embodiment of the present disclosure.
As shown in FIG. 4, first tile object 401 and second tile object 402 are both isosceles right triangles, so the centers of gravity of first tile object 401 and second tile object 402 are both hypotenuse midpoints, and the distance between the centers of gravity of first tile object 401 and second tile object 402 can be the length of the line 403 between the hypotenuse midpoints of first tile object 401 and second tile object 402.
In some embodiments of the present disclosure, in order to reliably detect the target tile real object, before determining that the correctly placed target tile real object exists in the tile real object to be detected in S220 shown in fig. 2, the graphic display method may further include:
and determining the real block object to be detected in the real block objects according to the gravity center distance between the real block objects.
In the embodiment of the present disclosure, after acquiring the real-time image, the electronic device can determine the barycentric distances between the tile real objects in the real-time image and, according to those barycentric distances, determine which of the tile real objects are the tile real objects to be detected, so as to avoid inaccurate answer matching caused by a non-standard puzzle layout.
Optionally, determining the segment real object to be detected in the segment real objects according to the barycentric distance between the segment real objects may specifically include:
grouping the spliced real objects according to the gravity center distance to obtain at least one spliced real object group, wherein the gravity center distance between the spliced real objects in each spliced real object group is smaller than or equal to a preset distance threshold value;
and taking the segment real object in the segment real object group with the largest number of segment real objects as the segment real object to be detected.
In the embodiment of the present disclosure, a maximum barycentric distance corresponding to the target graphic is preset in the electronic device, and this maximum barycentric distance is the preset distance threshold. After determining the barycentric distances between the tile real objects in the real-time image, the electronic device can divide the tile real objects whose pairwise barycentric distances are all less than or equal to the preset distance threshold into one group, obtaining at least one tile real object group, and take the tile real objects in the group containing the largest number of tile real objects as the tile real objects to be detected; that is, the tile real objects to be detected form the largest connected subgraph in the real-time image.
It should be noted that the preset distance threshold may be a distance value that can be set according to needs and determine the maximum connected subgraph in the real-time image, which is not limited herein.
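As a minimal sketch of this grouping step (not the claimed implementation), the tiles can be grouped with a connected-components search over the pairwise centre-of-gravity distances, and the largest group kept as the tile real objects to be detected; the function and variable names here are illustrative assumptions.

```python
from itertools import combinations

def tiles_to_detect(centroids, dist_threshold):
    """centroids: one (x, y) centre-of-gravity per tile real object.
    Returns the indices of the largest group of tiles whose pairwise
    centre-of-gravity distances stay within dist_threshold (the largest
    connected subgraph in the real-time image)."""
    n = len(centroids)
    parent = list(range(n))  # union-find over tile indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        dx = centroids[i][0] - centroids[j][0]
        dy = centroids[i][1] - centroids[j][1]
        if (dx * dx + dy * dy) ** 0.5 <= dist_threshold:
            parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return max(groups.values(), key=len)

# Toy example mirroring FIG. 5: four clustered tiles and two stray tiles.
centroids = [(100, 100), (120, 110), (105, 130), (125, 125), (400, 50), (420, 60)]
print(tiles_to_detect(centroids, dist_threshold=60))  # -> [0, 1, 2, 3]
```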
FIG. 5 illustrates a schematic diagram of another game interface provided by embodiments of the present disclosure.
As shown in fig. 5, the electronic device 501 may display a game interface 502 of a puzzle game, the real-time image 503 may be displayed in full screen in the game interface 502, and the target graphic 504 may be superimposed in the lower left corner of the real-time image 503. A first tile real object 505, a second tile real object 506, a third tile real object 507, a fourth tile real object 508, a fifth tile real object 509 and a sixth tile real object 510 are displayed within the real-time image 503. The electronic device 501 can cluster these tile real objects according to the barycentric distances between them, grouping together the tile real objects whose pairwise barycentric distances are all less than or equal to the preset distance threshold. The pairwise barycentric distances among the first tile real object 505, the second tile real object 506, the third tile real object 507 and the fourth tile real object 508 are all less than or equal to the preset distance threshold, and the barycentric distance between the fifth tile real object 509 and the sixth tile real object 510 is also less than or equal to the preset distance threshold; however, the barycentric distance between at least one of the first four tile real objects and the fifth tile real object 509 exceeds the preset distance threshold, and the same holds for the sixth tile real object 510. The six tile real objects are therefore divided into two tile real object groups. Since the group formed by the first tile real object 505, the second tile real object 506, the third tile real object 507 and the fourth tile real object 508 contains more tile real objects than the group formed by the fifth tile real object 509 and the sixth tile real object 510, the first four tile real objects are taken as the tile real objects to be detected, and the fifth tile real object 509 and the sixth tile real object 510 are taken as tile real objects not to be detected.
Therefore, in the embodiment of the present disclosure, the electronic device can accurately find the tile real objects to be detected in the real-time image and then detect the target tile real object among them, which not only improves the detection efficiency of the target tile real object but also improves the reliability of the detection result.
In another embodiment of the present disclosure, the determining that the correctly placed target tile entity exists in the tile entity to be detected in S220 shown in fig. 2 may specifically include:
detecting a target tile real object among the tile real objects to be detected according to the barycentric distances between the tile real objects to be detected and the real object angle parameters of the tile real objects to be detected;
and, in a case where the target tile real object is detected, determining that the target tile real object exists among the tile real objects to be detected.
In the embodiment of the present disclosure, after acquiring the real-time image, the electronic device may determine the barycentric distance between the tile real objects to be detected and the real object angle parameter of each tile real object to be detected in the real-time image, and detect the target tile real object among the tile real objects to be detected according to the barycentric distances and the real object angle parameters. If the target tile real object is detected, it is determined that the target tile real object exists among the tile real objects to be detected; otherwise, it is determined that no target tile real object exists among the tile real objects to be detected. In this way, the problem of inaccurate answer matching caused by a non-standard puzzle layout is avoided.
The object angle parameter of the to-be-detected segment object can include at least one of an object angle value of the to-be-detected segment object and an object angle vector corresponding to the to-be-detected segment object.
In the embodiment of the disclosure, each real object to be detected is preset with a preset number of vertexes. The preset number can be the maximum number of the corner points of each tile real object in the puzzle.
Taking the tangram as an example, the tangram may consist of 7 tiles, each tile corresponding to one physical pattern. The 7 tiles may be two first triangular tiles, one second triangular tile, two third triangular tiles, one square tile, and one parallelogram tile. The two first triangular tiles are tiles with the same attribute, and the two third triangular tiles are tiles with the same attribute. The areas of the first triangular tile, the second triangular tile and the third triangular tile decrease in sequence. The second triangular tile, the square tile and the parallelogram tile have the same tile area. It can be seen that the maximum number of corner points of any tile real object in the tangram is 4, and thus the preset number may be 4.
In some embodiments, if the preset number is greater than the number of corner points of the tile entity itself, vertices exceeding the number of corner points may be randomly allocated to each edge of the tile entity, and the vertices on each edge are uniformly arranged.
In other embodiments, if the preset number is greater than the number of corner points of the tile entity, vertices exceeding the number of corner points may be distributed to each edge of the tile entity according to a preset distribution rule, and the vertices on each edge are uniformly arranged. The preset allocation rule may be preset according to needs, and is not limited herein.
Continuing with the tangram example, each corner point of a triangular tile can be used as a vertex, and the fourth vertex can be placed at the midpoint of the hypotenuse of the triangular tile.
The starting vertex and the preset vertex order of each tile real object to be detected can be preset as needed, and are not limited herein. For example, the preset vertex order may be a clockwise order or a counterclockwise order.
Optionally, the real object angle value may be an angle value of a first included angle between a connecting line between a first target vertex and a second target vertex in the real object of the segment to be detected and the positive horizontal direction.
The positive horizontal direction may be preset as needed, and is not limited herein.
Further, the first target vertex may be a vertex with a first number in the real object of the segment to be detected, and the second target vertex may be a vertex with a second number in the real object of the segment to be detected.
The first number and the second number can be preset according to needs, and the first number is different from the second number.
In the embodiment of the disclosure, the object angle vector corresponding to the object to be detected can be generated according to the relative angle between the object to be detected and each of the other object to be detected.
Specifically, the relative angle between every two real object to be detected tiles may be an angle value of a second included angle between connecting lines between the first target vertex and the second target vertex of the two real object to be detected tiles.
Fig. 6 shows a schematic diagram of a physical angle parameter provided by an embodiment of the present disclosure.
As shown in FIG. 6, a first included angle 605 between a first connecting line 603, which joins the first vertex and the third vertex of the first tile real object 601, and the positive horizontal direction (e.g., to the right along horizontal line 608 in the figure) is the real object angle value of the first tile real object 601. A second included angle 606 between a second connecting line 604, which joins the first vertex and the third vertex of the second tile real object 602, and the positive horizontal direction is the real object angle value of the second tile real object 602. In the drawings, the arrow direction indicates the line direction.
With continued reference to FIG. 6, a third included angle 607 between the first connecting line 603 and the second connecting line 604 is the relative angle between the first tile real object 601 and the second tile real object 602.
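For concreteness, the real object angle value and the relative angle described above reduce to elementary trigonometry on the two target vertices. The sketch below assumes each tile is given as a list of (x, y) vertex coordinates in the preset order and that the first and third stored vertices are the two target vertices; these conventions are assumptions for illustration, not requirements of the disclosure.

```python
import math

def angle_value_deg(vertices, first_idx=0, second_idx=2):
    """Angle between the connecting line from the first target vertex to the
    second target vertex and the positive horizontal direction, in degrees."""
    (x1, y1), (x2, y2) = vertices[first_idx], vertices[second_idx]
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0

def relative_angle_deg(vertices_a, vertices_b):
    """Relative angle between two tiles: the angle between their two
    connecting lines, folded into [0, 180]."""
    diff = abs(angle_value_deg(vertices_a) - angle_value_deg(vertices_b)) % 360.0
    return min(diff, 360.0 - diff)

# Two right-triangle tiles; vertex order: three corners, then hypotenuse midpoint.
tri_a = [(0, 0), (2, 0), (0, 2), (1, 1)]
tri_b = [(5, 5), (5, 7), (7, 5), (6, 6)]
print(angle_value_deg(tri_a), angle_value_deg(tri_b), relative_angle_deg(tri_a, tri_b))
# -> 90.0 0.0 90.0
```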
Further, the gravity center distance between the real objects of the segments to be detected has been described above, and is not described herein.
In the embodiment of the present disclosure, optionally, detecting the target tile real object in the to-be-detected tile real object may specifically include, according to the barycentric distance between the to-be-detected tile real objects and the real object angle parameter of the to-be-detected tile real object:
identifying, according to the barycentric distances, a puzzle image to be matched among the reference puzzle images corresponding to the target graphic, wherein each puzzle image to be matched comprises a plurality of tile graphics to be matched, and each tile graphic to be matched corresponds to one tile real object to be detected;
and detecting, among the tile real objects to be detected, the target tile real object whose real object angle parameter and the corresponding tile graphic to be matched satisfy a preset angle consistency condition.
In the embodiment of the present disclosure, at least one reference puzzle image corresponding to the target graphic may be preset in the electronic device, each reference puzzle image being a puzzle answer corresponding to the target graphic.
The electronic device may first identify, among the reference puzzle images corresponding to the target graphic, the puzzle image to be matched whose barycentric distances are closest to those of the tile real objects to be detected, and then detect, among the tile real objects to be detected, the target tile real object whose real object angle parameter satisfies the preset angle consistency condition with the tile graphic of the same tile attribute in the puzzle image to be matched. The target tile real object is a correctly placed tile real object; therefore, when the target tile real object is detected, it can be determined that the target tile real object exists among the tile real objects to be detected.
Optionally, identifying, according to the barycentric distance, a to-be-matched tile pattern in the reference tile pattern corresponding to the target pattern may specifically include:
generating a real object adjacency matrix corresponding to the barycentric distances;
performing a bitwise AND computation between the reference adjacency matrix corresponding to each reference puzzle image and the real object adjacency matrix, to obtain a computation result corresponding to each reference adjacency matrix;
and taking the reference puzzle image corresponding to the reference adjacency matrix with the largest computation result as the puzzle image to be matched.
In this disclosure, a preset neighbour distance corresponding to each tile real object may be preset in the electronic device. The electronic device can compare the barycentric distance between every two tile real objects to be detected with the preset neighbour distance; if the barycentric distance is greater than the preset neighbour distance, the adjacency value for that barycentric distance is set to 0, otherwise it is set to 1. After obtaining the adjacency values corresponding to the barycentric distances between all tile real objects to be detected, the electronic device can arrange the adjacency values according to the preset tile order corresponding to each tile real object to be detected, generating the real object adjacency matrix corresponding to the barycentric distances.
After the real object adjacency matrix is generated, the electronic device can obtain the reference adjacency matrix corresponding to each reference puzzle image, perform a bitwise AND computation between each reference adjacency matrix and the real object adjacency matrix to obtain a computation result corresponding to each reference adjacency matrix, and then take the reference puzzle image corresponding to the reference adjacency matrix with the largest computation result as the puzzle image to be matched.
The reference adjacency matrix corresponding to each reference puzzle image may be generated before S210 shown in fig. 2, or may be generated before determining that the correctly placed target tile real object exists among the tile real objects to be detected in S220, which is not limited herein. The generation method of the reference adjacency matrix is similar to that of the real object adjacency matrix and is not repeated here.
Optionally, the number of the puzzle images to be matched may be at least one, that is, the number of the puzzle images to be matched may be 1, or may be multiple, which is not limited herein.
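A minimal sketch of this matrix-matching step is given below, under the assumption that the real object adjacency matrix and the reference adjacency matrices are binary arrays ordered by the same preset tile order; the helper names and the NumPy representation are illustrative choices, not part of the disclosure.

```python
import numpy as np

def build_adjacency(centroid_dists, neighbor_dists):
    """centroid_dists[i][j]: measured centre-of-gravity distance between tiles i and j.
    neighbor_dists[i][j]: preset neighbour distance for that pair of tiles.
    An entry is 1 when the measured distance does not exceed the preset one."""
    adj = (np.asarray(centroid_dists) <= np.asarray(neighbor_dists)).astype(np.uint8)
    np.fill_diagonal(adj, 0)  # a tile is not its own neighbour
    return adj

def puzzles_to_match(real_adj, reference_adjs):
    """Bitwise-AND the real object adjacency matrix with each reference
    adjacency matrix and keep the reference puzzle image(s) whose result
    has the largest number of matching 1 entries."""
    scores = [int(np.sum(real_adj & ref)) for ref in reference_adjs]
    best = max(scores)
    return [idx for idx, score in enumerate(scores) if score == best]

# Toy example with three tiles and two candidate reference puzzle images.
dists      = [[0, 10, 40], [10, 0, 12], [40, 12, 0]]
neighbours = [[0, 15, 15], [15, 0, 15], [15, 15, 0]]
real_adj = build_adjacency(dists, neighbours)
ref_a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.uint8)
ref_b = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], dtype=np.uint8)
print(puzzles_to_match(real_adj, [ref_a, ref_b]))  # -> [0]
```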
Fig. 7 is a schematic diagram illustrating a matching puzzle image according to an embodiment of the present disclosure.
As shown in fig. 7, the barycentric distances in each of the first puzzle image to be matched 705 and the second puzzle image to be matched 706 are the closest to the barycentric distances among the first tile real object 701, the second tile real object 702, the third tile real object 703 and the fourth tile real object 704, and in both puzzle images the tile graphics with the same attributes occupy the same relative positions as these four tile real objects. Therefore, the tile real objects to be detected formed by the first tile real object 701, the second tile real object 702, the third tile real object 703 and the fourth tile real object 704 have two puzzle images to be matched, namely the first puzzle image to be matched 705 and the second puzzle image to be matched 706.
Further, the angle consistency condition may include that a weighted sum of target parameter difference values is less than or equal to a target weighted-sum threshold, wherein a target parameter difference value is the difference between the real object angle parameter and the corresponding reference angle parameter of the tile graphic to be matched, and the target weighted-sum threshold is a preset weighted-sum threshold corresponding to the real object angle parameter.
In some embodiments, in the case that the object angle parameter of the tile object to be detected includes the object angle value of the tile object to be detected, the reference angle parameter of the tile pattern to be matched may include the reference angle value of the tile pattern to be matched.
The reference angle value of the tile pattern to be matched may be determined before S210 shown in fig. 2, or may be determined before determining that the target tile real object with the correct layout exists in the tile real object to be detected in S220, which is not limited herein. And the method for determining the reference angle value of the mosaic to be matched is similar to the method for determining the object angle value of the object to be detected, and is not repeated herein.
In these embodiments, optionally, the angle consistency condition may include that a weighted sum of angle differences between the real object angle value of the to-be-detected tile real object and the reference angle value of the to-be-matched tile image is less than or equal to a preset weighted sum threshold corresponding to the angle value.
In other embodiments, in a case where the real object angle parameter of the tile real object to be detected includes the real object angle vector corresponding to the tile real object to be detected, the reference angle parameter of the tile graphic to be matched may include the reference angle vector corresponding to the tile graphic to be matched.
The reference angle vector corresponding to the tile pattern to be matched may be determined before S210 shown in fig. 2, or may be determined before determining that the target tile real object with the correct layout exists in the tile real object to be detected in S220, which is not limited herein. And the determination method of the reference angle vector corresponding to the image of the segment to be matched is similar to the determination method of the object angle vector corresponding to the object of the segment to be detected, and is not repeated herein.
In these embodiments, optionally, the angle consistency condition may include that a weighted sum of vector difference values between an actual object angle vector corresponding to the tile object to be detected and a reference angle vector corresponding to the tile image to be matched is less than or equal to a preset weighted sum threshold corresponding to the angle vector.
In still other embodiments, in the case that the object angle parameter of the tile object to be detected includes the object angle value of the tile object to be detected and the object angle vector corresponding to the tile object to be detected, the reference angle parameter of the tile image to be matched may include the reference angle value of the tile image to be matched and the reference angle vector corresponding to the tile image to be matched.
In these embodiments, the angle consistency condition may optionally include that the weighted sum of the angle difference value and the vector difference value is less than or equal to a preset weighted sum threshold corresponding to the angle difference value and the vector difference value.
It should be noted that, in the embodiment of the present disclosure, the weighting coefficients of the angle parameters may be preset according to needs, and each weighting coefficient is any value in (0, 1), which is not limited herein.
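A sketch of the angle consistency test is given below, covering the variant in which both the angle value and the angle vector are used; the weights and the threshold are configuration values an implementer would choose (with weighting coefficients in (0, 1), as noted above), and all names are placeholders.

```python
def angles_consistent(real_angle, ref_angle, real_angle_vec, ref_angle_vec,
                      w_angle=0.5, w_vector=0.5, threshold=15.0):
    """Return True when the weighted sum of the angle-value difference and the
    angle-vector difference does not exceed the preset weighted-sum threshold."""
    angle_diff = abs(real_angle - ref_angle) % 360.0
    angle_diff = min(angle_diff, 360.0 - angle_diff)  # account for wrap-around
    vector_diff = sum(abs(a - b) for a, b in zip(real_angle_vec, ref_angle_vec))
    vector_diff /= max(len(real_angle_vec), 1)        # mean element-wise difference
    return w_angle * angle_diff + w_vector * vector_diff <= threshold

print(angles_consistent(44.0, 45.0, [90.0, 135.0], [92.0, 134.0]))  # True
print(angles_consistent(10.0, 80.0, [90.0, 135.0], [20.0, 60.0]))   # False
```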
Therefore, in the embodiment of the disclosure, the target tile real object matched with the to-be-matched tile image in the to-be-matched tile image can be accurately detected by using the angle consistency condition, that is, the correctly placed target tile real object can be accurately detected.
In another embodiment of the present disclosure, in order to accurately obtain the barycentric distance between the segment real objects and the real object angle parameter of each segment real object, before determining that the target segment real object correctly placed exists in the segment real object to be detected in S220 shown in fig. 2, the graphic display method may further include:
performing geometric analysis on the real-time image to obtain the vertex position of each splicing block real object;
and calculating the gravity center distance between the real object blocks and the real object angle parameters of the real object blocks according to the vertex positions.
In the embodiment of the present disclosure, after obtaining the real-time image, the electronic device may perform geometric analysis on the real-time image to obtain the vertex positions, such as vertex coordinates, of each tile real object in the real-time image; then, for each tile real object, calculate its barycentric position, such as barycentric coordinates, and its real object angle value from its vertex positions; and further obtain the barycentric distances between the tile real objects and the real object angle parameter of each tile real object from the barycentric positions and the real object angle values.
In the embodiment of the disclosure, each segment real object is preset with a preset number of vertexes. The preset number can be the maximum number of the corner points of each tile real object in the puzzle.
Taking the tangram as an example, the tangram may consist of 7 tiles, each tile corresponding to one physical pattern. The 7 tiles may be two first triangular tiles, one second triangular tile, two third triangular tiles, one square tile, and one parallelogram tile. The two first triangular tiles are tiles with the same attribute, and the two third triangular tiles are tiles with the same attribute. The areas of the first triangular tile, the second triangular tile and the third triangular tile decrease in sequence. The second triangular tile, the square tile and the parallelogram tile have the same tile area. It can be seen that the maximum number of corner points of any tile real object in the tangram is 4, and thus the preset number may be 4.
In some embodiments, if the preset number is greater than the number of corner points of the tile entity itself, vertices exceeding the number of corner points may be randomly allocated to each edge of the tile entity, and the vertices on each edge are uniformly arranged.
In other embodiments, if the preset number is greater than the number of corner points of the tile entity, vertices exceeding the number of corner points may be distributed to each edge of the tile entity according to a preset distribution rule, and the vertices on each edge are uniformly arranged. The preset allocation rule may be preset according to needs, and is not limited herein.
Continuing with the tangram example, each corner point of a triangular tile can be used as a vertex, and the fourth vertex can be placed at the midpoint of the hypotenuse of the triangular tile.
In the embodiment of the present disclosure, the vertices of each tile entity may be numbered in a preset vertex order starting from a starting vertex.
The starting vertex and the preset vertex sequence of each segment real object can be preset according to needs, and are not limited herein. For example, the preset vertex order may be a clockwise order or a counterclockwise order.
Specifically, after obtaining the vertex positions of the tile real objects, the electronic device may store the vertex positions of the tile real objects in the order of numbers.
Alternatively, the real object angle value may be an angle value of a first included angle between a connecting line between the first target vertex and the second target vertex in the tile real object and the positive horizontal direction. The positive horizontal direction may be preset as needed, and is not limited herein.
Further, the first target vertex may be a vertex having a first number in the tile entity, and the second target vertex may be a vertex having a second number in the tile entity. The first number and the second number can be preset according to needs, and the first number is different from the second number.
Continuing with the example of the puzzle as a seven-piece puzzle, the first number can be 1 and the second number can be 3, i.e. the physical angle value can be the angle value of the angle between the line connecting the first vertex and the third vertex of the puzzle piece and the positive horizontal direction.
Fig. 8 shows a schematic diagram of a geometric parameter provided by an embodiment of the present disclosure.
As shown in fig. 8, the electronic device may detect the vertex positions of the vertices (shown by dots in the figure) of the first tile real object 801, the second tile real object 802, the third tile real object 803 and the fourth tile real object 804 in the real-time image; for example, vertex 805 is a vertex of the fourth tile real object 804 and its position is one of the vertex positions of the fourth tile real object 804. The electronic device may also determine the connecting line (shown by dashed arrows in the figure) between the first vertex and the third vertex of each of the first tile real object 801, the second tile real object 802, the third tile real object 803 and the fourth tile real object 804; for example, connecting line 806 is the connecting line between the first vertex and the third vertex of the second tile real object 802.
With continued reference to FIG. 8, the positive horizontal direction can be to the right along the horizontal line 807, and thus, taking the second tile object 802 as an example, the object angle value of the second tile object 802 can be the angle value of the angle 808 between the connecting line 806 and the direction to the right along the horizontal line 807.
In this embodiment of the present disclosure, optionally, the specific process of geometrically analyzing the real-time image may include: inputting the real-time image into a pre-trained image segmentation model to obtain a plurality of image segmentation regions, wherein each image segmentation region corresponds to one tile attribute, and each tile attribute corresponds to one tile shape, one tile area and the physical pattern of a tile real object. Then, each image segmentation region is input into the vertex detection model pre-trained for the corresponding tile attribute, to obtain the vertex positions of the tile real object of the tile attribute corresponding to that image segmentation region. Therefore, in the embodiment of the present disclosure, the electronic device can use the pre-trained image segmentation model and vertex detection models to realize the geometric analysis of the real-time image, improving the efficiency and accuracy of the geometric analysis. It should be understood by those skilled in the art that the present disclosure is not limited to specific geometric analysis processes and methods.
In an embodiment of the present disclosure, for each tile entity, the electronic device may calculate a vertex position mean value of each vertex position of the tile entity, and use the vertex position mean value as a barycentric position of the tile entity. Taking the vertex position as the vertex coordinate, the electronic device may calculate a vertex coordinate mean value of the vertex coordinates of each tile real object, and then use the vertex coordinate mean value as the barycentric coordinate of the tile real object.
In this embodiment, for each segment real object, the electronic device may calculate, based on a vertex position corresponding to a first target vertex and a vertex position corresponding to a second target vertex of the segment real object, a first included angle between a connecting line between the first target vertex and the second target vertex of the segment real object and the positive horizontal direction, and use the first included angle as a real object angle value of the segment real object.
Further, the electronic device may calculate a first difference between the barycentric positions of every two segment real objects according to the barycentric position of each segment real object, and then use an absolute value of the first difference as the barycentric distance between the two segment real objects, thereby obtaining the barycentric distance between each segment real object.
Further, under the condition that the object angle parameter does not include the object angle vector corresponding to the segment object, the electronic device may directly use the object angle value of each segment object as the object angle parameter of the segment object.
Further, in the case that the object angle parameter includes the object angle vector corresponding to the tile real object, the electronic device may calculate, from the object angle values of the tile real objects, a second difference between the object angle values of every two tile real objects, and take the absolute value of the second difference as the relative angle between those two tile real objects. The relative angle is the angle value of the second included angle between the connection lines joining the first target vertex and the second target vertex of the two tile real objects. The relative angles between each tile real object and each of the other tile real objects may then be arranged according to the preset tile sequence corresponding to each tile real object, so as to generate the object angle vector corresponding to each tile real object. If the object angle parameter does not include the object angle value of the tile real object, the electronic device may use the object angle vector corresponding to each tile real object as the object angle parameter of that tile real object; otherwise, the electronic device may use both the object angle value of each tile real object and the object angle vector corresponding to that tile real object as the object angle parameter.
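A minimal sketch of how the object angle vector for one tile could be assembled from per-tile angle values, assuming a preset tile sequence; the tile names and sample angles are illustrative only.

```python
from typing import Dict, List

def object_angle_vector(
    tile_order: List[str],            # preset tile sequence
    angle_values: Dict[str, float],   # per-tile object angle value, in degrees
    target_tile: str,
) -> List[float]:
    """Relative angles (absolute differences of angle values) between the
    target tile and every other tile, arranged in the preset tile sequence."""
    return [
        abs(angle_values[target_tile] - angle_values[other])
        for other in tile_order
        if other != target_tile
    ]

# Example with three hypothetical tiles:
angles = {"tile_a": 10.0, "tile_b": 55.0, "tile_c": 100.0}
print(object_angle_vector(["tile_a", "tile_b", "tile_c"], angles, "tile_b"))  # [45.0, 45.0]
```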
Therefore, in the embodiment of the disclosure, the electronic device can accurately and efficiently calculate, from the vertex positions, the geometric parameters corresponding to each tile real object, such as the barycentric distance, the object angle value and the object angle vector, which improves the reliability of the detected target tile real object and allows answer matching to be performed reliably on the real-time image.
In still another embodiment of the present disclosure, in order to further improve the interactivity of the tangram game, after S220 shown in fig. 2, the graphic display method may further include:
and displaying target prompt information under the condition that the target graph is completely filled, wherein the target prompt information is used for prompting that all the spliced block real objects to be detected are correctly spliced.
In the embodiment of the disclosure, after the electronic device displays the target filling pattern in the target shape region, it may detect in real time whether the target graphic is completely filled; if it determines that the target graphic is completely filled, it may display target prompt information for prompting that all the tile real objects to be detected have been correctly spliced.
Optionally, the target prompt information may include at least one of prompt text and a prompt identification.
For example, the target prompt information may be the prompt text "Congratulations, you have cleared the level!".
FIG. 9 is a schematic diagram illustrating yet another game interface provided by embodiments of the present disclosure.
As shown in fig. 9, the electronic device 901 may display a game interface 902 of a tangram game, the real-time image 903 may be displayed in full screen in the game interface 902, and the target graphic 904 may be displayed superimposed in the lower left corner of the real-time image 903. When the electronic device detects that the target graphic 904 is completely filled, the prompt text "Congratulations, you have cleared the level!" may be displayed superimposed over the real-time image 903, so as to prompt the user that the tangram pattern in the real-time image 903 has been correctly stitched.
In the embodiment of the present disclosure, optionally, before displaying the target prompt message, the electronic device further needs to detect whether the target graphic is completely filled.
In some embodiments, the total number of tiles of the puzzle used to tile the target graphic may be preset within the electronic device. The electronic device may acquire the number of correctly placed target tile real objects and determine whether that number equals the preset total number of tiles; if so, it may determine that the target graphic is completely filled, otherwise it may determine that the target graphic is not completely filled.
In other embodiments, the total number of shape regions of the target graphic may be preset in the electronic device. The electronic device may acquire the number of shape regions of the filled target shape region, and determine whether the number of shape regions is the same as the preset total number of shape regions, and if so, may determine that the target pattern is completely filled, otherwise, may determine that the target pattern is not completely filled.
In still other embodiments, the electronic device may perform image recognition on the target graphic to detect whether an initial fill color of the target graphic exists in the target graphic, and if the initial fill color is detected to exist, it may be determined that the target graphic is not completely filled, otherwise, it may be determined that the target graphic is completely filled.
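The three detection variants described above could be sketched, for example, as the following hypothetical helper functions; the parameter names are illustrative only.

```python
from typing import Any, Iterable

def filled_by_tile_count(placed_tiles: int, total_tiles: int) -> bool:
    """First variant: compare correctly placed tiles with the preset total."""
    return placed_tiles == total_tiles

def filled_by_region_count(filled_regions: int, total_regions: int) -> bool:
    """Second variant: compare filled shape regions with the preset total."""
    return filled_regions == total_regions

def filled_by_color(region_pixels: Iterable[Any], initial_fill_color: Any) -> bool:
    """Third variant: the target graphic is complete once no pixel inside it
    still shows the initial fill color."""
    return all(pixel != initial_fill_color for pixel in region_pixels)
```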
Therefore, in the embodiment of the disclosure, the electronic device can quickly and accurately detect whether the target graphic is completely filled, which further improves the reliability of the detection result as to whether all the tile real objects to be detected have been correctly spliced.
The disclosed embodiment also provides a graphic display device, which will be described below with reference to fig. 10.
In some embodiments of the present disclosure, the graphic display apparatus may be an electronic device, such as the electronic device 110 shown in fig. 1. The electronic device may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, a vehicle-mounted terminal, a wearable device, an all-in-one machine, a smart home device or another device with a communication function, or a device simulated by a virtual machine or a simulator.
Fig. 10 is a schematic structural diagram of a graphic display device according to an embodiment of the present disclosure.
As shown in fig. 10, the graphic display apparatus 1000 may include a first display unit 1010, an image acquisition unit 1020, and a second display unit 1030.
The first display unit 1010 may be configured to display a target graphic.
The image obtaining unit 1020 may be configured to obtain a real-time image, which is an image including a segment real object to be detected.
The second display unit 1030 can be configured to display a target filling pattern in a target shape area in the target graph under the condition that it is determined that the target tile real object with the correct layout exists in the tile real object to be detected, wherein the target shape area is a relative layout area of the target tile real object in the target graph.
In the embodiment of the disclosure, a target graphic can be displayed while the user is assembling the puzzle, and a real-time image including the tile real objects to be detected can be acquired during that process. In the case that a correctly placed target tile real object exists among the tile real objects to be detected, a target filling pattern is displayed in the target shape region in the target graphic. Because the target shape region is the relative placement region of the target tile real object in the target graphic, the jigsaw puzzle game can automatically and accurately perform answer matching and can prompt the user in real time that the target tile real object has been placed correctly, thereby realizing real-time interaction with the user, improving the playability and interactivity of the jigsaw puzzle game, and further improving the user experience.
In some embodiments of the present disclosure, the real-time image may also be an image including at least two tile objects.
Accordingly, the graphic display device 1000 may further include a first processing unit, and the first processing unit may be configured to determine the segment real object to be detected in the segment real objects according to the gravity center distance between the segment real objects.
In some embodiments of the present disclosure, the first processing unit may include a first sub-processing unit and a second sub-processing unit.
The first sub-processing unit can be configured to group the segment real objects according to the gravity center distance to obtain at least one segment real object group, and the gravity center distance between each segment real object in each segment real object group is smaller than or equal to a preset distance threshold.
The second sub-processing unit may be configured to use the segment real object in the segment real object group with the largest number of segment real objects as the segment real object to be detected.
In some embodiments of the present disclosure, the graphic display device 1000 may further include a second processing unit and a third processing unit.
The second processing unit can be configured to detect the target segment real object in the segment real object to be detected according to the gravity center distance between the segment real objects to be detected and the real object angle parameter of the segment real object to be detected.
The third processing unit may be configured to determine that the target tile real object exists in the to-be-detected tile real object in case that the target tile real object is detected.
In some embodiments of the present disclosure, the second processing unit may include a third sub-processing unit and a fourth sub-processing unit.
The third sub-processing unit may be configured to identify, according to the barycentric distance, a to-be-matched jigsaw image among the reference jigsaw images corresponding to the target graphic, where each reference jigsaw image includes a plurality of to-be-matched tile patterns, and one to-be-matched tile pattern in the to-be-matched jigsaw image corresponds to one to-be-detected tile real object.
The fourth sub-processing unit may be configured to detect, among the tile real objects to be detected, the target tile real object whose object angle parameter and corresponding to-be-matched tile pattern satisfy a preset angle consistency condition.
In some embodiments of the present disclosure, the third sub-processing unit may be further configured to: generate a real-object adjacency matrix corresponding to the barycentric distances; perform a bitwise calculation between the reference adjacency matrix corresponding to each reference jigsaw image and the real-object adjacency matrix to obtain a calculation result corresponding to each reference adjacency matrix; and take the reference jigsaw image corresponding to the reference adjacency matrix with the largest calculation result as the to-be-matched jigsaw image.
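A possible sketch of this adjacency-matrix matching follows. It assumes the adjacency matrix is derived from the barycentric distances with a distance threshold and interprets the "bitwise calculation" as an element-wise AND followed by summation; both assumptions are illustrative, since the disclosure does not fix these details.

```python
import numpy as np
from typing import Dict

def adjacency_from_distances(distances: np.ndarray, threshold: float) -> np.ndarray:
    """1 where two tiles are close enough to count as adjacent, else 0."""
    adjacency = (distances <= threshold).astype(np.uint8)
    np.fill_diagonal(adjacency, 0)  # a tile is not adjacent to itself
    return adjacency

def best_reference_puzzle(real_adjacency: np.ndarray,
                          reference_adjacencies: Dict[str, np.ndarray]) -> str:
    """AND the real-object adjacency matrix with each reference adjacency
    matrix element-wise and keep the reference with the largest result."""
    scores = {
        name: int(np.bitwise_and(real_adjacency, ref).sum())
        for name, ref in reference_adjacencies.items()
    }
    return max(scores, key=scores.get)
```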
In some embodiments of the present disclosure, the object angle parameter of the to-be-detected tile object may include at least one of an object angle value of the to-be-detected tile object and an object angle vector corresponding to the to-be-detected tile object, and the object angle vector corresponding to the to-be-detected tile object may be generated according to a relative angle between the to-be-detected tile object and each of the other to-be-detected tile objects.
In some embodiments of the present disclosure, the angle consistency condition may include that a weighted sum of target parameter differences is less than or equal to a target weighted sum threshold, the target parameter difference may be a parameter difference between the real object angle parameter and a reference angle parameter corresponding to the tile image to be matched, and the target weighted sum threshold may be a preset weighted sum threshold corresponding to the real object angle parameter.
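For illustration, the angle consistency check could be sketched as a weighted sum of absolute differences between the real-object angle parameters and the reference angle parameters; the particular weighting scheme below is an assumption made for the sketch.

```python
from typing import Sequence

def meets_angle_consistency(real_params: Sequence[float],
                            reference_params: Sequence[float],
                            weights: Sequence[float],
                            threshold: float) -> bool:
    """Weighted sum of the absolute differences between the real-object angle
    parameters and the reference angle parameters, compared to the threshold."""
    weighted_sum = sum(
        w * abs(r - ref)
        for w, r, ref in zip(weights, real_params, reference_params)
    )
    return weighted_sum <= threshold
```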
In some embodiments of the present disclosure, the graphic display device 1000 may further include a geometric parsing unit and a parameter calculation unit.
The geometric analysis unit can be configured to perform geometric analysis on the real-time image to obtain the vertex position of each segment real object.
The parameter calculation unit may be configured to calculate a barycentric distance between the patch real objects and a real object angle parameter of the patch real objects from the vertex positions.
In some embodiments of the present disclosure, the graphic display device 1000 may further include a third display unit, where the third display unit may be configured to display target prompt information in a case that the target graphic is completely filled, and the target prompt information may be used to prompt that all the segment real objects to be detected are correctly spliced.
It should be noted that the graphic display device 1000 shown in fig. 10 may perform each step in the method embodiments shown in fig. 2 to 9, and implement each process and effect in the method embodiments shown in fig. 2 to 9, which are not described herein again.
Embodiments of the present disclosure also provide a graphics display device that may include a processor and a memory, which may be used to store executable instructions. The processor may be configured to read the executable instructions from the memory and execute the executable instructions to implement the graphic display method in the above embodiments.
Fig. 11 shows a schematic structural diagram of a graphic display device 1100 suitable for implementing embodiments of the present disclosure.
The graphic display device 1100 in the embodiments of the present disclosure may be an electronic device. The electronic devices may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), wearable devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like.
In some embodiments, the graphical display device 1100 may be the electronic device 110 shown in FIG. 1.
It should be noted that the graphic display device 1100 shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 11, the graphic display device 1100 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1101 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage means 1108 into a random access memory (RAM) 1103, so as to implement the graphic display method in the above-described embodiments. Various programs and data necessary for the operation of the graphic display device 1100 are also stored in the RAM 1103. The processing device 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
Generally, the following devices may be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1107 including, for example, Liquid Crystal Displays (LCDs), speakers, vibrators, and the like; storage devices 1108, including, for example, magnetic tape, hard disk, etc.; and a communication device 1109. The communication means 1109 may allow the graphic display device 1100 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 11 illustrates a graphics display apparatus 1100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
The embodiments of the present disclosure also provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the processor is enabled to implement the graphic display method in the above embodiments.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs.
The embodiments of the present disclosure also provide a computer program product, which may include a computer program that, when executed by a processor, causes the processor to implement the graphic display method in the above-described embodiments.
For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102. The computer program, when executed by the processing device 1101, performs the above-described functions defined in the graphic display method of the embodiment of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP, and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the graphic display device; or may exist separately without being assembled into the graphic display device.
The computer readable medium carries one or more programs which, when executed by the graphics display device, cause the graphics display device to perform:
displaying a target graph; acquiring a real-time image, wherein the real-time image is an image comprising a spliced object to be detected; and under the condition that the target block real object with correct placement is determined to exist in the block real object to be detected, displaying a target filling pattern in a target shape area in the target graph, wherein the target shape area is a relative placement area of the target block real object in the target graph.
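For illustration only, the overall flow that such a program might implement could be sketched as follows; `display`, `camera` and `detector` are hypothetical interfaces used for the sketch, not APIs defined by the disclosure.

```python
def graphic_display_loop(display, camera, detector) -> None:
    """Top-level flow: show the target graphic, read real-time images, and fill
    the matching shape region whenever a correctly placed target tile real
    object is detected. `display`, `camera` and `detector` are hypothetical
    interfaces used only for illustration."""
    display.show_target_graphic()
    while not display.target_graphic_completely_filled():
        frame = camera.capture()                  # acquire a real-time image
        for tile in detector.detect_correct_tiles(frame):
            display.fill_shape_region(tile)       # display the target filling pattern
    display.show_prompt("Congratulations, you have cleared the level!")
```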
In embodiments of the present disclosure, computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (14)

1. A graphical display method, comprising:
displaying a target graph;
acquiring a real-time image, wherein the real-time image is an image comprising a to-be-detected spliced real object;
and under the condition that the target segment real object with correct placement is determined to exist in the segment real object to be detected, displaying a target filling pattern in a target shape area in the target graph, wherein the target shape area is a relative placement area of the target segment real object in the target graph.
2. The method of claim 1, wherein the real-time image is an image comprising at least two tiled real objects;
before determining that a target segment real object with correct placement exists in the segment real objects to be detected, the method further comprises the following steps:
and determining the segment real objects to be detected in the segment real objects according to the gravity center distance between the segment real objects.
3. The method of claim 2, wherein the determining the segment real objects to be detected in the segment real objects according to the gravity center distance between the segment real objects comprises:
grouping the segment real objects according to the gravity center distance to obtain at least one segment real object group, wherein the gravity center distance between each segment real object in each segment real object group is smaller than or equal to a preset distance threshold value;
and taking the segment real object in the segment real object group with the largest number of segment real objects as the segment real object to be detected.
4. The method of claim 1, wherein the determining that a correctly placed target segment real object exists in the segment real object to be detected comprises:
detecting the target block real object in the block real object to be detected according to the gravity center distance between the block real objects to be detected and the real object angle parameter of the block real object to be detected;
and under the condition that the target segment real object is detected, determining that the target segment real object exists in the to-be-detected segment real object.
5. The method according to claim 4, wherein the detecting the target tile real object in the tile real object to be detected according to the gravity center distance between the tile real objects to be detected and the real object angle parameter of the tile real object to be detected comprises:
identifying a jigsaw pattern to be matched in reference jigsaw patterns corresponding to the target pattern according to the gravity center distance, wherein each reference jigsaw pattern comprises a plurality of tile patterns to be matched, and one tile pattern to be matched in the jigsaw pattern to be matched corresponds to one segment real object to be detected;
and detecting the target segment real object of which the angle parameter and the corresponding segment graph to be matched meet the preset angle consistency condition in the segment real object to be detected.
6. The method as claimed in claim 5, wherein the identifying a jigsaw pattern to be matched from among the reference jigsaw patterns corresponding to the target pattern according to the barycentric distance comprises:
generating a real object adjacency matrix corresponding to the gravity center distance;
carrying out bitwise calculation on the reference adjacent matrix corresponding to each reference jigsaw puzzle image and the real object adjacent matrix respectively to obtain a calculation result corresponding to each reference adjacent matrix;
and taking the reference mosaic image corresponding to the reference adjacent matrix with the maximum calculation result as the mosaic image to be matched.
7. The method according to claim 5, wherein the object angle parameter of the object to be detected comprises at least one of an object angle value of the object to be detected and an object angle vector corresponding to the object to be detected, and the object angle vector corresponding to the object to be detected is generated according to a relative angle between the object to be detected and each of the other object objects to be detected.
8. The method according to claim 5, wherein the angle consistency condition includes that a weighted sum of target parameter difference values is less than or equal to a target weighted sum threshold, the target parameter difference value is a parameter difference value between the physical angle parameter and a reference angle parameter corresponding to the tile pattern to be matched, and the target weighted sum threshold is a preset weighted sum threshold corresponding to the physical angle parameter.
9. The method according to claim 2 or 4, wherein before the determining that a correctly placed target segment real object exists in the segment real object to be detected, the method further comprises:
performing geometric analysis on the real-time image to obtain the vertex position of each spliced real object;
and calculating the gravity center distance between the real object blocks and the real object angle parameters of the real object blocks according to the vertex positions.
10. The method of claim 1, wherein after displaying a target fill pattern within a target shape region in the target graphic, the method further comprises:
and displaying target prompt information under the condition that the target graph is completely filled, wherein the target prompt information is used for prompting that all the spliced block real objects to be detected are correctly spliced.
11. A graphic display device, comprising:
a first display unit configured to display a target graphic;
the image acquisition unit is configured to acquire a real-time image, wherein the real-time image is an image comprising a to-be-detected spliced real object; and
and the second display unit is configured to display a target filling pattern in a target shape area in the target graph under the condition that the target tile real object which is placed correctly exists in the to-be-detected tile real object, wherein the target shape area is a relative placing area of the target tile real object in the target graph.
12. A graphic display device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the graphic display method of any of claims 1-10.
13. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to carry out the graphical display method of any one of the preceding claims 1 to 10.
14. A computer program product, characterized in that it comprises a computer program which, when executed by a processor, causes the processor to carry out the graphical display method of any one of the preceding claims 1 to 10.
CN202110091009.8A 2021-01-22 2021-01-22 Graphic display method, apparatus, device and medium Pending CN114797084A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110091009.8A CN114797084A (en) 2021-01-22 2021-01-22 Graphic display method, apparatus, device and medium
PCT/CN2021/135688 WO2022156389A1 (en) 2021-01-22 2021-12-06 Graphic display method, apparatus and device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110091009.8A CN114797084A (en) 2021-01-22 2021-01-22 Graphic display method, apparatus, device and medium

Publications (1)

Publication Number Publication Date
CN114797084A true CN114797084A (en) 2022-07-29

Family

ID=82525073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110091009.8A Pending CN114797084A (en) 2021-01-22 2021-01-22 Graphic display method, apparatus, device and medium

Country Status (2)

Country Link
CN (1) CN114797084A (en)
WO (1) WO2022156389A1 (en)


Also Published As

Publication number Publication date
WO2022156389A1 (en) 2022-07-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination