CN115580716B - Projection picture output method, system and equipment based on object module


Info

Publication number
CN115580716B
CN115580716B (application CN202211576391.2A)
Authority
CN
China
Prior art keywords
image
projection picture
projection
virtual image
object module
Prior art date
Legal status
Active
Application number
CN202211576391.2A
Other languages
Chinese (zh)
Other versions
CN115580716A (en)
Inventor
尹一笑
冯毅
曹邱晴
张元道
Current Assignee
Jizhu Information Technology Shenzhen Co ltd
Puzanga Information Technology Nanjing Co ltd
Original Assignee
Jizhu Information Technology Shenzhen Co ltd
Puzanga Information Technology Nanjing Co ltd
Priority date
Filing date
Publication date
Application filed by Jizhu Information Technology Shenzhen Co ltd, Puzanga Information Technology Nanjing Co ltd filed Critical Jizhu Information Technology Shenzhen Co ltd
Priority to CN202211576391.2A priority Critical patent/CN115580716B/en
Priority to CN202310652554.9A priority patent/CN116668653A/en
Publication of CN115580716A publication Critical patent/CN115580716A/en
Application granted granted Critical
Publication of CN115580716B publication Critical patent/CN115580716B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/363 Image reproducers using image projection screens
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3188 Scale or resolution adjustment
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the technical field of interactive projection and discloses a projection picture output method, system and device based on a physical object module, comprising the following steps: capturing an acquired image that contains both the projection picture and the physical module; locating and extracting a reference line of the projection picture; locating and extracting a first feature line of the physical module; determining the size of the virtual image from the first feature line and the reference line; and displaying the virtual image in the projection picture while keeping the size and position of the projection picture unchanged. The beneficial effects are as follows: the method obtains the first feature line of the physical module and the reference line of the projection picture separately, and determines the size of the virtual image from the relative proportion between them. A virtual image of the appropriate size can therefore be rendered in the projection picture even when the picture size changes with the projection distance, satisfying interactive projection display under complex conditions.

Description

Projection picture output method, system and equipment based on object module
Technical Field
The invention relates to the technical field of interactive projection, in particular to a projection picture output method, system and equipment based on a real object module.
Background
An interactive projection system works by capturing a target image with a capture device and analyzing the captured image with an image analysis system to determine the mode corresponding to the captured object; a real-time picture is then projected according to the current mode state, producing a tightly coupled interaction between the participant and the on-screen content.
A projection interactive game uses projection to play an interactive game: a sensing system senses the player's actions within the projection area, analyzes and computes them, and executes the corresponding game commands. Depending on the application scene, such games can be divided into wall projection, floor projection and desktop projection. For child users, game interaction assisted by physical modules works better, and products combining physical modules with projection have appeared, such as balloon-popping and whack-a-mole games.
Projection interactive game devices on the market are common in commercial venues but rarely used at home, because they place certain requirements on the projection distance and on the mounting position of the sensing system and are generally fixed in place after manual commissioning. At home, the displayed pattern changes size because the user places the projection device in a different position each time. Meanwhile, existing physical modules generally have fixed sizes, and when modules of different sizes are used the device cannot adapt and match autonomously. Thus, when a projection interactive game device is used under complex conditions without manual position adjustment and correction, the size of the physical module often fails to correspond to the size of the displayed image.
Among patent literature, for example, patent publication CN104615328B, an invention patent entitled "Projection output method and electronic device", discloses a projection output method and a wearable electronic device. The wearable electronic device includes a detection module and a projection output module, and the method comprises: capturing an object within the projection range with the detection module when a first operation of a user is received, the projection range being the range in which the projection output module can output projection content; acquiring characteristic parameters of the region to be projected; adjusting attribute parameters of the projection content based on the characteristic parameters; and controlling the projection output module to output the adjusted projection content onto the object. In that document, however, the projected content is adjusted through a person operating the wearable electronic device, and the adjustment scales the entire projection picture as a whole; it cannot adjust only the content within the picture while keeping the projection picture itself unchanged.
Disclosure of Invention
To solve the prior-art problem that the size of a virtual image in a projection picture cannot automatically correspond to the size of a physical module, the invention discloses a projection picture output method, system and device based on a physical object module.
The specific technical scheme is as follows. A projection picture output method based on a physical object module comprises the following steps: capturing an acquired image that contains both the projection picture and the physical module; locating and extracting a reference line of the projection picture in the acquired image and obtaining the length of the reference line in the acquired image; obtaining the pixel length of the reference line in the projection picture; locating and extracting a first feature line of the physical module in the acquired image and obtaining its length in the acquired image; and determining the size of the virtual image in the projection picture from the length of the first feature line in the acquired image, the length of the reference line in the acquired image, the pixel length of the reference line in the projection picture, the original resolution of the virtual image, and a preset proportional relation between the actually projected virtual image and the physical size of the object.
The projection picture in the acquired image offers at least one candidate reference line, for example an edge line of the projection picture, a line drawn inside the picture, or a line connecting key points; the reference line can be chosen as needed. It is the measure of the relative size of the projection picture within the acquired image: its length changes as the projection picture changes size, but the choice of reference line generally stays fixed once made.
The physical module in the acquired image likewise has at least one first feature line, for example a line along the module's edge or a line formed by a marking pattern on its upper surface. The first feature line is the reference that characterizes the relative size of the physical module within the acquired image, and it too can be chosen as needed.
For physical modules of different sizes, or projection pictures of different sizes, the system uses the reference line as a benchmark to determine the actually projected virtual image size corresponding to each module. The proportion between the first feature line and the reference line reflects the size proportion between the physical module and the projection picture, from which the size of the corresponding virtual image in the projection picture is determined, so that the projected virtual images match across different modules and different picture sizes.
Further, when the virtual image is a template image, a second feature line is defined in the original template image, and the system determines the size of the virtual image in the projection picture through a preset proportion between the first feature line and the second feature line. A common second feature line is one side of the template image, such as the side corresponding to its width or height, or a line segment in the template that must match a physical dimension. For example, if the template image is a racing-track image, the segment representing the track width can be set as the second feature line, so that the actually projected track width and the width of the physical racing-car model satisfy the set proportion.
A projection picture output method based on a physical object module comprises the following steps: capturing an acquired image that contains both the projection picture and the physical module; locating and extracting a reference line of the projection picture in the acquired image and obtaining its length in the acquired image; obtaining the pixel length of the reference line in the projection picture; locating and extracting a first feature line of the physical module in the acquired image and obtaining its length in the acquired image; determining a size parameter of a generated graphic in the projection picture from the length of the first feature line in the acquired image, the length of the reference line in the acquired image, and the pixel length of the reference line in the projection picture; and generating the corresponding virtual image for display in the projection picture by combining the attribute parameters of the graphic. The virtual image specifically comprises either a pre-stored template image or a geometric figure generated autonomously by the information processing unit.
Further, the physical module comprises a first sub-module. When the virtual image is a template image, the corresponding virtual image and its original resolution are determined from the information carried by the first sub-module; when the virtual image is a geometric figure, the type, style, shape, size, original resolution, or attribute parameters of the generated figure, such as color, line width or internal fill pattern, are determined from that information.
Further, the first sub-module may be the image that the physical module presents in the acquired image: either image data formed by the module's own appearance, or specific image content printed on the surface facing the camera.
Further, the first sub-module may also be a sensing element contained in the physical module, for example a radio-frequency tag whose identification information the system reads through a radio-frequency reading unit. The system obtains the identification information through the reading unit matching the carrier; different information carriers and acquisition modes do not affect the realization of the invention.
When processing the acquired image, the system judges the position of the physical module relative to the projection picture, i.e. whether the module lies inside or outside the picture. It compares the area where the projection picture and the module overlap with the module's corresponding region in the acquired image. Specifically, if the overlap exceeds 90% of the module region's area (the ratio can be set as required), the module is considered to be inside the projection picture, and the overlap area is used as the module's region in subsequent processing. In actual calculation, the projection picture region may be approximated by its reference rectangular area and the module region by the module's reference rectangular area.
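The overlap test can be sketched as follows; this is a minimal illustration, assuming both regions have already been reduced to axis-aligned reference rectangles, and the function names and the default 0.9 threshold are placeholders rather than anything prescribed by the patent:

```python
# Rectangles are (x, y, w, h) tuples in the acquired-image frame.

def overlap_ratio(module_rect, screen_rect):
    """Fraction of the module rectangle's area covered by the projection picture."""
    mx, my, mw, mh = module_rect
    sx, sy, sw, sh = screen_rect
    ix = max(0, min(mx + mw, sx + sw) - max(mx, sx))  # intersection width
    iy = max(0, min(my + mh, sy + sh) - max(my, sy))  # intersection height
    return (ix * iy) / float(mw * mh)

def module_inside_screen(module_rect, screen_rect, threshold=0.9):
    # threshold is the settable ratio named in the text (90% by default)
    return overlap_ratio(module_rect, screen_rect) >= threshold
```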
As shown in fig. 1, the target object to be analyzed in an image is often irregular, and a rectangular area may be used to approximate the region where it lies; this area is referred to as the reference rectangular area. Methods for determining the reference rectangular area of a target are described in detail later.
Further, when the physical module is outside the projection picture area, the system sets the display alignment of the virtual image in the projection picture to centered, left, right, top or bottom alignment.
As shown in fig. 2, when the object is inside the projection picture area, the virtual image may be displayed overlaid on it. Specifically, the system obtains the reference rectangular area of the module in the acquired image and the rectangle's four vertices, determines the center of the area from the vertex positions, converts that center's coordinates from the acquired-image frame to the projection-picture frame, and uses the result as the position of the virtual image in the projection picture; that is, the center of the virtual image is aligned with the center of the module and the image is displayed over it.
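A minimal sketch of this placement step, assuming the acquired-image-to-projection mapping is available as a 3x3 homography matrix H (see the coordinate-conversion sketch later in this section); the function name and inputs are illustrative:

```python
import numpy as np

def overlay_position(vertices_acq, H):
    """Center of the module's reference rectangle, mapped into the projection frame."""
    cx, cy = np.mean(np.asarray(vertices_acq, dtype=np.float64), axis=0)
    x, y, w = H @ np.array([cx, cy, 1.0])  # homogeneous coordinates
    return x / w, y / w                    # the virtual image is centered here
```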
As shown in fig. 3, when the object is inside the projection picture area, the virtual image may be displayed with a gap: the system obtains the module's reference rectangular area in the acquired image, calculates the distance from each side of that rectangle to the corresponding side of the projection picture area, places the virtual image on the side with the maximum distance at a preset gap, and projects it at that position in the projection picture.
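The side-selection rule reads, in a minimal sketch (rectangles as (x, y, w, h) tuples; names illustrative):

```python
def gap_side(module_rect, screen_rect):
    """Pick the side of the module with the most free space inside the screen."""
    mx, my, mw, mh = module_rect
    sx, sy, sw, sh = screen_rect
    distances = {
        "left": mx - sx,
        "right": (sx + sw) - (mx + mw),
        "top": my - sy,
        "bottom": (sy + sh) - (my + mh),
    }
    return max(distances, key=distances.get)  # the virtual image goes on this side
```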
As shown in fig. 4, when the object is inside the projection picture area, the virtual image may be displayed adjacently (aligned against a line segment). The system determines a third feature line corresponding to the physical module in the acquired image; within the acquired image, the third feature line may lie inside or outside the module's region. Preferably, the third feature line is determined from key points or from the reference rectangular area. Its position coordinates in the acquired image are determined and converted into coordinates in the projection picture, and the virtual image is positioned so that its second feature line is parallel to or coincident with the module's third feature line, with the midpoint of the second feature line on the perpendicular bisector of the third feature line and the virtual image on the opposite side of the third feature line from the module. In practical applications, the third feature line may be set to coincide with the first feature line.
As shown in fig. 5, when the object is inside the projection picture area, the virtual image may be displayed surrounding it. The system obtains the module's reference rectangular area in the acquired image and calculates the center and radius of its circumscribed circle, then converts both into the projection-picture reference frame. The virtual image is a sector-shaped ring whose inner radius is 1.1 to 3 times the radius of the module's circumscribed circle and whose outer radius is 1.1 to 4 times the inner radius; the corresponding graphic is projected from the center coordinates and the inner and outer arc radii. These coefficient ranges are chosen mainly so that the object and the virtual image are easily observed as a whole in actual use; a ring that is too narrow or too large is unfavorable for use and observation.
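A brief sketch of the surround geometry, assuming the rectangle's four vertices in the acquired image are known; the coefficients 1.5 and 2.0 are merely example values within the 1.1-3 and 1.1-4 ranges stated above:

```python
import cv2
import numpy as np

def surround_ring(vertices_acq, inner_k=1.5, outer_k=2.0):
    """Center and inner/outer radii of the sector ring, in acquired-image units."""
    pts = np.asarray(vertices_acq, dtype=np.float32)
    (cx, cy), r = cv2.minEnclosingCircle(pts)  # circumcircle of the rectangle
    inner = inner_k * r
    outer = outer_k * inner
    return (cx, cy), inner, outer  # convert to the projection frame before drawing
```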
Further, by analyzing continuously acquired images, the system detects in real time how the module moves after being placed; specifically it calculates the rotation angle relative to the module's initial position and outputs the corresponding graphic content according to that angle.
Further, the physical module also comprises a second sub-module, and the positional relation of the virtual image within the projection picture is determined automatically from the second sub-module. Like the first sub-module, the second sub-module can be image data presented by the module in the acquired image, or a sensing component.
It should be noted that the first and second sub-modules may be the same pattern, structure or sensor chip, or different ones.
Further, when the physical module is outside the projection picture, the relative position of the virtual image in the picture is determined automatically from the information of the second sub-module, matched to centered, left-aligned, right-aligned, top-aligned or bottom-aligned display.
Further, when the physical module is inside the projection picture, the information of the second sub-module automatically selects one of the following display modes for the relative position of the virtual image and the object: overlay display, gap display, adjacent display, or surround display.
The first feature line is a line segment characterizing the relative size of the physical module in the acquired image; depending on the acquisition method, it can be a line connecting key points on the module or a side of the module's reference rectangular area.
Further, as shown in fig. 6, one method of acquiring the module's first feature line in the acquired image is as follows: marker images are placed at key-point positions on the module surface, and the markers are detected directly in the acquired image. The position of each marker image is determined, its center is taken as the key-point position, and the line connecting key points is taken as the module's first feature line. Specifically, the marker positions can be determined by template matching, i.e. the system compares the known marker image against the image features in the acquired image. For a red circular marker pattern, the marker can be located by feature analysis of the red channel of the acquired image combined with shape analysis. Alternatively, an artificial-intelligence image detection algorithm can be trained on the known markers to obtain a prediction model, which then detects the markers in the acquired image to determine the key-point positions. Preferably, markers are placed as key points at the four corners of the rectangle outlining the module's upper surface; the sides of this rectangle can be selected as first feature lines, and by detecting the four key points the system determines the module's reference rectangular area and first feature line. Note that placing key points at the corners of the module's outline directly characterizes its size; in actual use, as long as the correspondence between the first feature line formed by the key points and the module's size is known, the key points may be preset at other positions on the module surface.
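The red-marker variant might be sketched as follows with OpenCV; the channel threshold, area and circularity limits are illustrative values, not ones fixed by the patent:

```python
import cv2
import numpy as np

def detect_red_keypoints(image_bgr, min_area=30, min_circularity=0.7):
    """Centers of roughly circular red blobs, taken as key-point positions."""
    bgr = image_bgr.astype(np.int16)
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    mask = ((r - np.maximum(b, g)) > 50).astype(np.uint8) * 255  # red-dominant pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    keypoints = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if area < min_area or perim == 0:
            continue
        if 4 * np.pi * area / perim ** 2 < min_circularity:  # shape (circularity) check
            continue
        m = cv2.moments(c)
        keypoints.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # blob center
    return keypoints
```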
Further, as shown in fig. 7, another method of acquiring the module's first feature line is to detect the contour region of the module in the acquired image. Since the background of the acquired image is relatively simple, the module's contour region is easily obtained by an image segmentation algorithm, or by an image-difference method comparing an acquired image with the module placed against one without it; this region is the module's reference region, from which a corresponding reference rectangular area can be determined. Preferably, the reference rectangular area is chosen as the minimum circumscribed rectangle of the reference region, its maximum inscribed rectangle, or a square with the same center and the same area as the reference region, and either side of the reference rectangular area is then selected as the first feature line.
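A minimal sketch of the image-difference variant, assuming a frame captured before the module was placed is available for comparison; the threshold, kernel size and function names are illustrative:

```python
import cv2
import numpy as np

def module_reference_rect(frame_with, frame_without, thresh=35):
    """Reference rectangle of the module, found by image differencing."""
    diff = cv2.absdiff(frame_with, frame_without)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop small noise blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    region = max(contours, key=cv2.contourArea)  # the module's reference region
    return cv2.boundingRect(region)              # (x, y, w, h); a side gives B1B2
```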
Further, as shown in fig. 8, the first feature line can also be obtained by acquiring the module's reference rectangular area through an artificial-intelligence image target detection algorithm. A detection model is trained on collected images of known target objects; the model then determines the rectangular region of the acquired image most likely to contain the module image, which serves as the reference region from which the reference rectangular area is determined. The reference region may generally be used directly as the reference rectangular area, or adjusted to the target's extent first, for example by segmenting the target within the region according to an analysis of its image content. When several physical modules appear in the acquired image, they can be selected in order of likelihood, with a threshold set and regions below it discarded. For a specific target detection algorithm see "You Only Look Once: Unified, Real-Time Object Detection" (Electronic ISSN: 1063-6919) or other literature on image target detection. The system selects one side of the reference rectangular area as the first feature line.
Further, the first feature line can also be obtained from three-dimensional image information: the system acquires 3D data with a three-dimensional acquisition device and uses the difference in depth between the module and the plane of the projection picture to determine the module's contour reference region, from which the reference rectangular area is determined; one side of that rectangle is then selected as the first feature line. Key points can also be detected from the depth difference between raised key-point positions on the module and the rest of its surface, with the line connecting them taken as the first feature line. Three-dimensional acquisition devices here include, but are not limited to, binocular vision devices, structured-light 3D imaging devices, 3D laser scanners and TOF sensor devices.
The reference line is a line segment characterizing the relative size of the projection picture in the acquired image; depending on the acquisition method, it can be a line connecting key points of the projection picture or a side of the picture's reference rectangular area.
Further, as shown in fig. 9, the reference line of the projection picture in the acquired image can be determined from the picture's reference rectangular area, which in turn can be determined by image feature analysis of the projection picture within the acquired image. Preferably, the projector is controlled to project a single-color frame, and the picture region is segmented from the acquired image by color; alternatively a black-and-white grid frame is projected and the picture region is determined by feature analysis. The corresponding reference rectangular area is then defined on that region, and one side of the rectangle is selected as the reference line. The pixel length of the reference line in the projection-picture frame is then determined: the pixel length corresponding to the picture width is the horizontal resolution of the projection picture, and that corresponding to the picture height is its vertical resolution. Many image-processing methods for locating the rectangular region of a projection picture have been published, such as contour detection, region shape detection and foreground segmentation, and projecting a specific pattern helps improve detection accuracy; three-dimensional image data can also determine the picture's reference region from depth information, so a suitable method can be substituted equivalently. If the preliminarily detected region is not rectangular, the system can appropriately rectify it to obtain the picture's reference rectangular area. In fig. 9, the projection picture 40 coincides with its own reference rectangular area 50.
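As a sketch, the single-color variant could look like this, assuming a green frame is projected; the HSV bounds are illustrative:

```python
import cv2

def screen_reference_quad(image_bgr):
    """Four corners of the projection picture, found via a projected green frame."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))  # green hue range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    region = max(contours, key=cv2.contourArea)
    quad = cv2.boxPoints(cv2.minAreaRect(region))  # reference rectangular area
    return quad  # any side of this rectangle can serve as the reference line A1A2
```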
As shown in fig. 10, the reference line of the projection picture 40 in the acquired image 30 may also be determined from lines connecting key points 60 in the picture. The system controls the projector to display a marker pattern, such as a green square, at each key-point position; the marker positions in the acquired image can then be determined by image feature matching. Alternatively the projector displays a blinking marker at each key point, and the marker position is found quickly by image differencing. The center of each marker image is taken as the key-point position of the projection picture, and the line connecting two corresponding key points is the picture's reference line. The system calculates the pixel length of the reference line in the projection picture from the key points' coordinates in the picture. Preferably, the four corners of a reference rectangle near the four corners of the picture outline are marked as key points; projecting markers at these points determines their positions in the acquired image and thereby the picture's reference rectangular area.
In the invention, the lengths of the first feature line and the reference line are calculated in the acquired-image reference frame, and the pixel length of the virtual image in the projection-picture reference frame is obtained through proportional formulas and coordinate conversion.
Further, coordinate conversion uses at least three mutually corresponding reference points, not collinear, in the projection picture (including its edges): three such points form two intersecting vectors, so the coordinates of any point can be expressed in terms of them, and this relative relation is the same in the acquired-image frame and the projection-picture frame. Given the coordinates of the reference points in both frames, any point on the projection picture whose coordinates are known in the acquired image can have its projection-picture coordinates calculated from the reference-point information, and vice versa. Preferably, the four vertices of the reference rectangular area corresponding to the picture outline in the acquired image are chosen as reference points for coordinate conversion. Further, the projection picture can be divided into several sub-areas, with coordinate conversion inside each sub-area using the vertex coordinates of that sub-area's reference rectangle, yielding more accurate positions.
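Using the preferred four-vertex case, the coordinate conversion can be sketched with a homography; the quads and the 1920x1080 resolution below are assumptions for illustration only:

```python
import cv2
import numpy as np

def acquired_to_projection(points, screen_quad_acq, screen_quad_proj):
    """Map points from acquired-image coordinates into projection-picture pixels."""
    H, _ = cv2.findHomography(np.float32(screen_quad_acq),
                              np.float32(screen_quad_proj))
    pts = np.float32(points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: map a module center detected at (812, 455) in the acquired image into
# an assumed 1920x1080 projection frame whose four corners were detected earlier.
# acquired_to_projection([(812, 455)], detected_quad,
#                        [(0, 0), (1920, 0), (1920, 1080), (0, 1080)])
```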
In addition, it should be noted that the matching between virtual images and physical modules may be one-to-one, or one-to-many with several virtual images corresponding to one module; several local regions delimited on the module can each match a different virtual image, and these local regions can be determined directly by image-processing detection or from their relative relation to the module's region.
Many target-region detection and recognition techniques can directly replace the methods used above to detect the points, lines and regions of the object and the projection picture in the acquired image. Where the image-processing goal is the same, the relevant techniques in the references of this invention or other published image-processing techniques can be substituted equivalently; they are not listed exhaustively here, see for example "OpenCV 4 Computer Vision, Python Language Implementation" (ISBN: 9787111689485, Joseph Howse and Joe Minichino; translated by Liu Bing and Gao Bo). Several detection and positioning methods can also be used together, fusing their results to obtain higher-precision positions.
Furthermore, because a key-point marker image is much smaller than the projection picture or the physical module, detecting key points directly in the whole acquired image is easily disturbed by other regions, with non-key-point areas recognized as key points; determining key points from the reference rectangle of the object or the picture is less prone to misrecognition but slightly less accurate in position. Detecting the reference rectangular area first and then searching for key points only within the relevant local range of that rectangle avoids misrecognition while improving positional accuracy. This is a further optimization of the above methods and can further improve key-point positioning accuracy.
Further, the system determines the module's reference rectangular area and the picture's reference rectangular area in the acquired image, and calculates the angle between the bottom edge of the module's rectangle and the bottom edge of the picture's rectangle. When the virtual image is a template image, the image is rotated by this angle, preferably about its own center, so that the bottom edge of the virtual image's circumscribed rectangle in the projection picture is parallel to the bottom edge of the module's rectangle, and the rotated image is displayed. When the virtual image is a generated graphic, a rotation-angle parameter is added at generation time so that the graphic is parallel to the module's reference rectangle. In particular, when the projection picture is rectangular and its reference rectangular area is its own outer contour, the rotation angle of the virtual image can be calculated directly as the angle between the bottom edge of the module's reference rectangle and the bottom edge of the picture.
Further, before the projection device is used, the method may include judging whether the actually projected picture is rectangular and, if not, performing the projector's keystone correction.
Further, the method may also include judging whether the projection picture within the acquired image is rectangular; if not, the image is geometrically corrected, processing continues with the corrected image, and the correction parameters are stored for converting between positions in the corrected and original images; if it is rectangular, no processing is needed.
Further, the system can control the projector to project a black-and-white grid image and capture the corresponding acquired image, establishing a more accurate mapping between projector picture coordinates and acquired-image coordinates. Generally, four points of a rectangular area are taken on the projection picture and their coordinates obtained both in the picture and in the acquired image, which establishes the mapping; other points are then converted according to this correspondence. This is simple and fast, but errors arise when the picture is locally deformed in the acquired image. Using the grid image, a mapping can instead be built for each cell, with the four vertices of each cell serving as coordinate reference points and conversion within the cell following the reference relation built from those vertices, giving more accurate coordinates.
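The per-cell refinement might be sketched as follows, assuming each grid cell's four vertices are known in both frames; the data layout is an illustrative assumption:

```python
import cv2
import numpy as np

def convert_with_grid(point, grid_cells):
    """grid_cells: list of (src_quad, dst_quad) pairs, one per checkerboard cell."""
    for src_quad, dst_quad in grid_cells:
        contour = np.float32(src_quad).reshape(-1, 1, 2)
        if cv2.pointPolygonTest(contour, (float(point[0]), float(point[1])), False) >= 0:
            # build the local mapping from this cell's four vertices only
            H, _ = cv2.findHomography(np.float32(src_quad), np.float32(dst_quad))
            pts = np.float32([point]).reshape(-1, 1, 2)
            return cv2.perspectiveTransform(pts, H).reshape(2)
    return None  # point lies outside the detected picture area
```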
In the description above, the image data actually used is mainly planar. It is straightforward to use a depth camera, 3D scanner or similar device at the acquisition stage to obtain three-dimensional data, determine the key position information by analyzing the image's three-dimensional features, and then process it according to the same steps; this too falls within the content of the invention. Common three-dimensional acquisition techniques such as passive binocular sensors, structured-light sensors, line-laser sensors and TOF sensors can be substituted here.
Likewise, images may be acquired by several cameras and stitched, and the stitched image processed as above, which enlarges the range the acquired image actually covers.
Likewise, when the display picture is relatively large, the pictures of several projectors may need to be stitched; it suffices to determine the mapping between the coordinates within each picture and the original complete picture and perform the corresponding conversion, which does not affect the implementation of the invention.
Finally, the virtual image need not fill the projection picture. The system then fills a suitable background, for example a solid black background that highlights the virtual image; other content such as images or animations may also be used, or a blank margin may be shown with the brightness of the corresponding area turned down.
The beneficial effects of the invention are as follows: the method determines the size of the virtual image by acquiring the first feature line of the physical module and the reference line of the projection picture in the acquired image, using the relative proportion between them together with the preset proportion between the actually projected virtual image and the physical size of the object. Virtual images of matching size are fed back automatically when physical modules of different sizes are used for projection interaction and when the picture size changes with the projection distance, satisfying interactive projection display under complex conditions.
Drawings
Fig. 1 is a schematic view of a reference rectangular area of the present invention.
Fig. 2 is a schematic diagram of an overlay display of the present invention.
Fig. 3 is a schematic diagram of a spacer display of the present invention.
Fig. 4 is a schematic view showing adjacent display of the present invention.
Fig. 5 is a schematic view of the surround display of the present invention.
Fig. 6 is a schematic diagram of key points of a first feature line of the present invention.
Fig. 7 is a schematic view of a reference rectangular area of a first feature line of the present invention.
Fig. 8 is a schematic diagram of a reference rectangular region with the maximum probability of containing an image of a target object module according to the invention.
Fig. 9 is a schematic view of a reference rectangular area of a reference line of the present invention.
FIG. 10 is a schematic diagram of the key points of the reference line of the present invention.
Fig. 11 is a schematic reference diagram of the present invention.
Fig. 12 is a schematic view of a first embodiment of the present invention.
Fig. 13 is a schematic view of the puzzle assist system of the present invention.
Fig. 14 is a schematic view of a second embodiment of the present invention.
Fig. 15 is a schematic view of a first embodiment of a mat structure according to the present invention.
Fig. 16 is a schematic view of a second embodiment of a mat structure according to the present invention.
Fig. 17 is a reference line schematic diagram of a second embodiment of the present invention.
Fig. 18 is a schematic diagram of virtual image orientation of the present invention.
Fig. 19 is a schematic diagram of virtual image orientation determination in the second embodiment of the present invention.
Fig. 20 is a schematic diagram illustrating still another virtual image orientation determination according to the second embodiment of the present invention.
Fig. 21 is another virtual image orientation determination diagram in the second embodiment of the present invention.
Fig. 22 is a schematic diagram of the embodiment of fig. 21.
Fig. 23 is a schematic view of a projection screen division placement area in the second embodiment of the present invention.
Fig. 24 is a schematic view of a third embodiment of the present invention.
Fig. 25 is a schematic diagram of a template image of an application scene according to a third embodiment of the present invention.
Fig. 26 is a schematic diagram of a generated geometric figure in an application scenario according to the third embodiment of the present invention.
Fig. 27 is a second schematic view of an application scenario of the third embodiment of the present invention.
Fig. 28 is a third schematic view of an application scenario of the third embodiment of the present invention.
Fig. 29 is a schematic view of a fourth embodiment of the present invention.
Fig. 30 is a schematic view of a fifth embodiment of the present invention.
Fig. 31 is a schematic view of a third embodiment of a mat structure according to the present invention.
FIG. 32 is a schematic view of a fourth embodiment of a mat structure according to the present invention.
Wherein:
10 - physical module; 20 - virtual image; 30 - acquired image; 40 - projection picture;
50 - reference rectangular area; 60 - key point; 80 - mat;
81 - upper mat layer; 82 - lower mat layer; 811 - pattern layer; 812 - stop lever; 813 - limit protrusion;
814 - groove; 815 - anti-skid fixing strip;
83 - clamping bar; 84 - clip; 90 - template paper; 91 - limit hole.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without creative effort fall within the scope of the invention.
The invention provides a projection picture output method intended to match the size of a physical module with the size of a virtual image simply and quickly, so that the subsequent interactive game can proceed conveniently. The method is executed mainly by the projection device and comprises the following steps: capturing an acquired image that contains both the projection picture and the physical module; locating and extracting a reference line of the projection picture in the acquired image and obtaining its length in the acquired image; obtaining the pixel length of the reference line in the projection picture; locating and extracting a first feature line of the physical module in the acquired image and obtaining its length in the acquired image; and determining the size of the virtual image in the projection picture at a preset proportion from the length of the first feature line in the acquired image, the length of the reference line in the acquired image, and the pixel length of the reference line in the projection picture.
In the invention, because the physical module and the virtual image are not in the same reference frame, the pixel length in the projection picture corresponding to the physical module is not easy to determine directly. The method therefore exploits the fact that the reference line can be identified and measured in both the acquired image and the projection picture: the module's size in the acquired image is converted indirectly into the corresponding size in the projection picture, and the pixel length of the corresponding virtual image is calculated at the preset proportion, so that modules of different sizes correspond to virtual images of different sizes.
The virtual image can be a template image stored in the system or fetched from a server, or a geometric figure generated autonomously by the system. When the virtual image is a template image, the preset proportion between the actually projected virtual image and the physical size of the object is expressed by the relation between the virtual image's second feature line and the first feature line. The system reads the original resolution of the pre-stored virtual image and determines the pixel length of the second feature line at that resolution. Since the virtual image is finally displayed in the projection picture, the final calculation yields the virtual image's size at the projection picture's resolution. The lengths of the first feature line and the reference line are obtained directly in the acquired-image frame; the second feature line's length in that frame follows from its preset proportion to the first feature line. From the second feature line's length, the reference line's length and the reference line's pixel length, the second feature line's pixel length in the projection-picture frame is calculated, and the size of the virtual image in the projection picture is determined from that pixel length together with the second feature line's length in the original virtual image.
When the virtual image is a geometric figure generated autonomously by the system, it has no original resolution; its display resolution coincides with that of the projection picture. In that case the preset proportion together with the parameters of the first feature line and the reference line determines the pixel length, in the projection picture, of the critical size parameters of the simple geometric figure, such as side length or radius.
As shown in fig. 11, an image containing the entire projection picture and the physical module is acquired, and the first feature line length of the module and the reference line length of the projection picture in the acquired image are determined with the image-processing methods described above.
Denote the reference line of the projection picture 40 as A1A2 and the module's first feature line as B1B2; denote the reference frame of the acquired image 30 as Image1, the projection picture 40 (the inner dotted area within the acquired image) as Image2, and the original template image as Image3. Using the image-processing methods above, the coordinates of A1, A2, B1 and B2 in the acquired image are determined and written A1 (xA1, yA1), A2 (xA2, yA2), B1 (xB1, yB1), B2 (xB2, yB2). In the acquired-image frame, the distance from A1 to A2 is LA1A2_Image1, the reference line length, and the distance from B1 to B2 is LB1B2_Image1, the first feature line length. The system also obtains the pixel length of A1A2 in the projection-picture frame, written LA1A2_Image2.
When the virtual image is a template image, the template can be chosen by the user or determined from information carried by the object. As shown in fig. 11, the virtual image's second feature line is C1C2, and the preset ratio of the module's first feature line length to the virtual image's second feature line pixel length, in the projection-picture frame, is K1. The pixel length of the second feature line in the original template image is obtained and written LC1C2_Image3, i.e. the pixel length between key points C1 and C2 of the template at its original resolution. The original resolution of the template virtual image obtained by the system is X_Image3 × Y_Image3.
The size of the virtual image in the projection-picture frame is then derived as follows: (LA1A2_Image2 / LA1A2_Image1) × LB1B2_Image1 gives LB1B2_Image2, the pixel length of the first feature line in the projection-picture frame; dividing by K1 gives LC1C2_Image2 = LB1B2_Image2 / K1, the pixel length of the second feature line in the projection picture. Scaling the original resolution by LC1C2_Image2 / LC1C2_Image3 yields the displayed size of the virtual image: width X_Image3 × (LC1C2_Image2 / LC1C2_Image3) and height Y_Image3 × (LC1C2_Image2 / LC1C2_Image3).
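A numeric sketch of this calculation (all values illustrative):

```python
def virtual_image_size(LA1A2_img1, LA1A2_img2, LB1B2_img1,
                       K1, LC1C2_img3, X_img3, Y_img3):
    """Displayed size of the template virtual image in projection-picture pixels."""
    LB1B2_img2 = LA1A2_img2 / LA1A2_img1 * LB1B2_img1  # first feature line, px
    LC1C2_img2 = LB1B2_img2 / K1                       # second feature line, px
    scale = LC1C2_img2 / LC1C2_img3                    # vs. original resolution
    return round(X_img3 * scale), round(Y_img3 * scale)

# e.g. reference line 800 px in the acquired image and 1920 px in the projection,
# first feature line 100 px, K1 = 0.5, second feature line 400 px in a 1200 x 900
# template: virtual_image_size(800, 1920, 100, 0.5, 400, 1200, 900) -> (1440, 1080)
```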
It should be noted that since the aspect ratio of the template virtual image is fixed, once either the horizontal or the vertical pixel length is determined the other side is uniquely determined as well, so either one suffices for the calculation.
When the virtual image is a geometric figure generated by the system from parameters, the system determines the figure's size from the length of the module's first feature line in the projection picture and determines the displayed content from the shape and other related parameters. Common figures include rectangles, regular polygons, circles, sectors and sector rings; the shape may be set by the user or determined from information carried by the module's first sub-module. Specifically: for a rectangle, its length and width are determined from the first feature line's length in the projection picture; for a circle or sector, the radius; for a regular polygon, the side length; and for a sector ring, the inner and outer radii.
In particular, in one embodiment the projected virtual image is a rectangle whose width L1 in the actually projected picture is M1 times the first feature line's length and whose height H1 is M2 times it.
From the results L1 and H1, the pixel values of the rectangle's width and height in the projection picture are determined, and with them the rectangle itself. The line width, fill color and other common graphic attributes can be set by the user or the system, or determined from information carried by the object; other figures are handled similarly.
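As a sketch, the rectangle case could be drawn as follows; M1, M2 and the default attributes stand in for user- or module-supplied values:

```python
import cv2

def draw_generated_rect(canvas, center, feature_len_proj,
                        M1=5.0, M2=8.0, color=(128, 128, 128), line_width=20):
    """Draw the rectangle whose size follows the first feature line's pixel length."""
    w, h = M1 * feature_len_proj, M2 * feature_len_proj   # L1 and H1 from the text
    x0, y0 = int(center[0] - w / 2), int(center[1] - h / 2)
    cv2.rectangle(canvas, (x0, y0), (int(x0 + w), int(y0 + h)), color, line_width)
    return canvas
```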
It should be noted that, before a projection apparatus is used, rectangular (keystone) correction is generally performed to ensure that the projected picture is rectangular; in the presently disclosed method this correction is done through human-machine interaction or automatically by the system. A rectangle-corrected projection picture improves viewing comfort and also facilitates more accurate analysis and processing of the projection picture.
When the image-capturing device faces the projection picture squarely, the projection picture within the captured image is generally upright, i.e. its bottom edge is parallel to the bottom edge of the captured image. When the two are not parallel and the deviation is large, the projection module can project a black-and-white grid chart and perform geometric correction of the image according to the projection picture in the captured image, so that coordinate calculation, coordinate conversion and image recognition remain accurate. After correction the bottom edge of the projection picture is parallel to the bottom edge of the captured image, and the system stores the coordinate mapping between the images before and after correction to facilitate subsequent coordinate conversion.
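A common way to realize such a correction and its stored coordinate mapping is a planar perspective transform. The following sketch assumes the four corners of the projection picture have already been detected in the captured image (e.g. from the projected grid chart); the corner coordinates and target size here are placeholders, not values from the patent.

```python
import cv2
import numpy as np

# Detected corners of the projection picture inside the captured image,
# ordered A1..A4 (illustrative values only).
corners_in_capture = np.float32([[212, 95], [1060, 118], [1085, 702], [190, 680]])
w, h = 900, 600  # chosen size of the corrected, upright picture
upright = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Homography from the captured-image frame to the corrected frame; storing H
# is the "mapping relation of the coordinates before and after correction".
H = cv2.getPerspectiveTransform(corners_in_capture, upright)

pt = np.float32([[[640, 400]]])                 # a point in the captured image
pt_corrected = cv2.perspectiveTransform(pt, H)  # the same point after correction
```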
The following is a detailed description of specific embodiments of different application scenarios.
First embodiment: as shown in fig. 12, desktop projection is used and the object module is a puzzle information block (puzzle piece for short). In the interaction, the user places the puzzle information block within the captured image area; the system determines the area to be tiled from the information block and projects the corresponding outline. The user then places assembled small puzzle pieces inside the outline area, and the system detects them and interacts accordingly.
The user places the puzzle information block within the capture area; the system captures an image containing the block and the projection picture, obtains a difference result against an image without the block using the image-difference method, removes noise regions from the difference result, determines the circumscribed rectangular region of the area where the block lies, and determines the image content on the block. Other image-processing methods may be substituted for this detection step.
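A minimal sketch of this image-difference step is shown below, assuming OpenCV; the threshold and minimum-area values are illustrative choices, not values specified by the patent.

```python
import cv2
import numpy as np

def locate_piece(frame_with_piece, frame_without_piece, min_area=500):
    """Difference the two captures, suppress noise, and return the
    circumscribed rectangle (x, y, w, h) of the puzzle piece, or None."""
    diff = cv2.absdiff(frame_with_piece, frame_without_piece)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    # Morphological opening removes small noise regions in the difference result.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    # The largest remaining region is taken to be the puzzle piece.
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```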
As shown in fig. 12, the first feature lines of the puzzle piece are the width B1B2 and the height B1B4 of the reference rectangular region; their lengths within the captured image are LB1B2_Image1 and LB1B4_Image1.
The system identifies the image content on the puzzle piece, i.e. the information contained in the first sub-module, and determines from it that a rectangular frame graphic is to be projected, with width 5 times B1B2 and height 8 times B1B4, a size corresponding to the actual size of the complete puzzle. The line width of the rectangular frame defaults to 20 and the color to gray; these attribute parameters can be adjusted by the user or determined by the puzzle piece. The graphic corresponding to the puzzle piece may also be a shape other than a rectangle, and other attribute parameters may be set as required.
When the system handles only a single puzzle, the attribute parameters can take fixed values. When the system supports many puzzle types, different puzzle information blocks correspond to different attribute parameters (shape, line thickness, background content, color and other common graphic parameters); the system then automatically retrieves the corresponding parameters by identifying the information of the first identification sub-module of the block, which improves the user experience and meets the need to match different parameters to different puzzles automatically. Concretely, the system can identify the pattern information on the surface of the information card to obtain the corresponding parameters.
As shown in fig. 12, before the physical object is placed, the system controls the projection picture to project key-point mark images at selected points A1 and A2, determines the position coordinates of A1 and A2 in the captured image by image feature matching, calculates their distance LA1A2_Image1 in the captured image, and calculates their pixel distance LA1A2_Image2 in the projection picture from the set coordinates.
From the virtual-graphic parameter formulas (3) and (4) above, the size parameters of the graphic to be projected in the projection picture can be determined. The summary of the invention describes several alternative methods for detecting and calculating the points A1, A2, B1 and B2; in this embodiment any of them may be substituted as needed, and they are not repeated here one by one.
In this embodiment, besides pattern identification, the attribute parameters of the puzzle piece may also be read through a sensing element, most commonly a radio-frequency identification (RFID) tag: the puzzle piece contains an RFID tag, an RFID reader is placed in the plane of the projection picture, and the attribute parameters of the corresponding virtual image are determined by reading the tag. Compared with image recognition, tag-based identification is more accurate, since overly complex image scenes easily cause false recognition; on the other hand it requires additional hardware, so its cost and convenience are inferior to the image-recognition scheme. The choice can therefore be made according to the actual application requirements.
Further, by comparing the degree of coincidence between the region of the puzzle information block and the region of the projection picture, the system automatically determines whether the block has been placed inside the projection picture. Specifically, the reference rectangular region of the puzzle information block is determined in the captured image, the reference rectangular region of the projection picture is calculated, and the ratio of the area of their overlap to the area of the block's reference rectangle is computed; if this ratio exceeds a set threshold the block is considered to be inside the projection picture, otherwise outside.
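The coincidence test reduces to an axis-aligned rectangle intersection. A minimal sketch follows; the 0.9 threshold is illustrative, the patent only requires "a set threshold".

```python
def overlap_ratio(piece_rect, frame_rect):
    """Area of (piece ∩ projection picture) divided by the piece's own area.
    Rectangles are (x, y, w, h) in the captured-image frame."""
    px, py, pw, ph = piece_rect
    fx, fy, fw, fh = frame_rect
    ix = max(0, min(px + pw, fx + fw) - max(px, fx))
    iy = max(0, min(py + ph, fy + fh) - max(py, fy))
    return (ix * iy) / float(pw * ph)

# Illustrative values: the piece lies fully inside the projection picture.
inside = overlap_ratio((120, 80, 60, 60), (100, 50, 800, 600)) > 0.9  # True
```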
Regarding the position of the graphic in the projection picture: when the puzzle piece is placed outside the projection picture, the system selects a suitable area within the projection picture, or automatically determines the placement mode from the information contained in the second identification sub-module of the puzzle information block; common placement modes include centered, left-aligned, right-aligned, top-aligned and bottom-aligned.
When a puzzle information block is placed inside the projection picture and is not one of the small pieces of the complete puzzle, the system generally selects a suitable position within the projection picture where the graphic will not overlap the block. The system takes the rectangular region of the block as its reference rectangle; preferably, it obtains the area of the projection picture in the captured image, determines the corresponding reference rectangle, calculates the distance from each side of the block's reference rectangle to the corresponding side of the projection picture's reference rectangle, and places the virtual image on the side with the maximum distance, either at a preset separation or centered within that side region. The area of the projection picture within the captured image may be determined by controlling the projector to project a set picture, e.g. a single-color background, and detecting the corresponding region by image processing; see the summary above for the specific method.
When the placed piece is inside the projection picture and is also one of the small pieces of the complete puzzle, the system identifies and compares it, determines its relative position within the complete puzzle, and projects the outline pattern of the complete puzzle accordingly, combined with the piece's current position. For example, if the piece is the top-left piece of the complete puzzle, the system determines the outline position of the complete puzzle from the outline position of that piece.
Further, to enhance the interactive experience, the system determines whether the projected virtual image exceeds the area of the projection picture available for display, and if so prompts the user to adjust the position of the puzzle piece so that more projection space is available. The system further calculates whether the virtual image to be projected exceeds the entire current projection range of the projection device, and if so prompts the user to increase the distance between the device and the projection plane so as to enlarge the projection picture. In both cases the result is obtained by directly comparing region sizes within the same reference frame.
Further, to extend the system's intelligent prompting, as shown in fig. 13, the user can interact within the virtual image, i.e. assemble the subdivided pieces inside it. When the user wants the system to indicate where a small piece belongs in the complete puzzle, the user places the piece in the circular area on the right of fig. 13. That circular area carries a system-projected pattern prompting the user where to place the piece, its size matched to the piece (generally 0.5 to 3 times); the system then projects the virtual image of the piece at the corresponding position inside the virtual image frame of the projection picture. Specifically, an image-recognition algorithm compares the piece against all piece patterns of the puzzle in the database to determine its position in the complete puzzle, then determines the corresponding position within the virtual image region and projects the prompt; if the piece does not belong to the complete puzzle, the system gives a corresponding prompt.
In the interaction process, the virtual image is a template image, and the position of the small block jigsaw in the projection picture is determined according to the relative position of the small block jigsaw in the complete jigsaw and the position of the rectangular virtual frame graph in the projection picture. The size of the virtual image in the interaction process can be determined according to the proportional relation between the sizes of the small block puzzles and the complete puzzles.
Further, in another aided-puzzle example, the difference is that the system recognizes the information contained in the first identification sub-module of the puzzle information block and determines from it that a corresponding template image is to be projected, i.e. a template image whose size matches the size of the complete physical puzzle. In this example the template image shows the content of the entire puzzle and the outline of each piece, so the user can compare a piece to be placed against each piece in the template image to find its location.
The size of the template image in the projection picture can be calculated from the captured image; this corresponds to the case in the summary where the virtual image is a template image. The points A1, A2, B1, B2 are as described above, and their related parameters are calculated by the foregoing scheme. In this embodiment C1 and C2 are the two end points of the edge along the template image's width; C1C2 is the second feature line of the virtual image, LC1C2_Image2 is its length in the actual projection picture, and LB1B2_Image2 is the length of the first feature line of the puzzle information block in the actual projection picture. Once a puzzle is determined, the length ratio of C1C2 in the actually projected template image to LB1B2_Image2 is also determined: (LC1C2_Image2 / LB1B2_Image2) = K2. Then, from LB1B2_Image2 = (LA1A2_Image2 / LA1A2_Image1) × LB1B2_Image1 and Y = (LC1C2_Image2 / LC1C2_Image3) × Y_Image3, where Y_Image3 is the original Y-axis resolution of the template image and LC1C2_Image3 is the pixel distance of C1C2 in the original template image, the actual projected pixel size Y of the template image along the Y axis within the projection picture can be calculated.
That is, the size of the template image in the projection picture is determined by formula (5):

Y = (K2 × (LA1A2_Image2 / LA1A2_Image1) × LB1B2_Image1 / LC1C2_Image3) × Y_Image3    (5)
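The chain of ratios in formula (5) is easy to mis-order, so a short sketch may help; argument names mirror the patent's notation and are otherwise illustrative.

```python
def template_y_pixels(la1a2_img2, la1a2_img1, lb1b2_img1, k2,
                      lc1c2_img3, y_img3):
    """Sketch of formula (5): actual projected Y-axis pixel size of the
    template image within the projection picture."""
    lb1b2_img2 = (la1a2_img2 / la1a2_img1) * lb1b2_img1  # first feature line, projection frame
    lc1c2_img2 = k2 * lb1b2_img2                         # second feature line, projection frame
    return (lc1c2_img2 / lc1c2_img3) * y_img3            # Y = (LC1C2_Image2 / LC1C2_Image3) * Y_Image3
```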
further, similar to the foregoing manner of prompting the tile jigsaw, the system may also identify the tile jigsaw selected by the user later, determine the corresponding position of the tile in the complete jigsaw, and project the corresponding prompting information.
Further, for this embodiment, before the physical object is detected, rectangular correction may be applied to the projection picture, and then the captured image may be geometrically corrected or the camera repositioned so that the projection picture in the captured image is upright.
With an ordinary puzzle, a user who needs a hint generally compares pieces against the printed picture of the complete puzzle. For younger children this comparison is already difficult, and when there are particularly many pieces, e.g. close to a hundred, even an adult finds it difficult to compare patterns one by one.
An intelligent puzzle-assistance system comprises an image acquisition unit, a projection unit and an information processing unit. The projection unit projects a projection picture, and the image acquisition unit captures an image including the projection picture and the puzzle. By analyzing the captured image, the information processing unit determines the positions of the projection picture and the puzzle area and establishes a coordinate mapping between positions in the captured-image reference frame and the projection-picture reference frame. The system detects a small puzzle piece placed by the user in the puzzle-identification area and matches it against all piece data in the database to determine its relative position in the complete puzzle; using the coordinate mapping, it converts that relative position into position coordinates within the puzzle placement area of the projection picture; finally it projects prompt information at those coordinates.
As shown in fig. 13, the projection unit first projects a projection picture containing a puzzle-identification area and a puzzle placement area, the two areas not overlapping. The user then places a small puzzle piece in the identification area; the acquisition unit captures a picture containing the piece and transmits it to the information processing unit. The placement area can further be determined from the position of a piece the user has already placed: the system determines the region of the complete puzzle from the placed piece and converts it into position information within the projection picture. The information processing unit detects the piece in the identification area of the captured picture, identifies and compares it against the pieces of the complete puzzle to obtain its relative position, converts that into the correct position coordinates within the placement area of the projection picture, and finally controls the projection unit to project prompt information at those coordinates. In fig. 13 the middle dotted frame is the placement area and the right circular frame is the identification area. If image recognition yields the relative position (2, 2), i.e. second row, second column, the coordinates of the piece within the placement area are obtained by coordinate conversion and the corresponding virtual image is projected.
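The conversion from a (row, column) position to projection coordinates is a simple grid mapping. The sketch below assumes the placement area is an axis-aligned rectangle divided into equal cells; the grid dimensions and rectangle are hypothetical parameters, not specified by the patent.

```python
def hint_position(row, col, grid_rows, grid_cols, placement_rect):
    """Map a piece's 1-indexed (row, col) in the complete puzzle to the
    centre of its cell inside the puzzle placement area.
    placement_rect = (x, y, w, h) in projection-picture pixels."""
    x, y, w, h = placement_rect
    cell_w, cell_h = w / grid_cols, h / grid_rows
    cx = x + (col - 0.5) * cell_w
    cy = y + (row - 0.5) * cell_h
    return cx, cy  # project the prompt mark centred here

# e.g. piece (2, 2) of a 4-row, 6-column puzzle in a 600x400 placement
# area whose top-left corner sits at (200, 100): -> (350.0, 250.0)
print(hint_position(2, 2, 4, 6, (200, 100, 600, 400)))
```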
Second embodiment: as shown in fig. 14, ground, desktop or wall projection may be used; the object module 10 is a sheet of template paper, and the interaction is that the projection module projects onto the template paper a virtual image to be copied, as a reference for the user.
Specifically, the user may select the virtual template image to be drawn from the system database, from a web server, or by sending it to the system. Before any projection-related processing, the system can preprocess the virtual template image as required and use the processed image as the virtual image to be projected. The original resolution of the virtual image to be projected is denoted X_Image3 × Y_Image3. The virtual image may be a regular complete rectangular image or an irregular image with transparent regions. When stored in a computer, its size is represented by the X-axis and Y-axis resolutions; by fixing the pixel value displayed in the projection picture on either the X or the Y axis and scaling proportionally, the size of the virtual image in the projection picture is uniquely determined. Here the four vertices of the virtual template image are C1, C2, C3 and C4, and the corresponding second feature lines are C1C2 and C1C4.
As shown in fig. 14, the template paper is printed with key points 60, and preferably, the key points 60 are four black circular marks, and the reference rectangular area and the first characteristic line of the template paper can be determined according to the four key points.
In this embodiment, the reference rectangular area is the display area of the virtual image. The positions of the centers of the mark images in the captured image can be obtained by an image processing algorithm, e.g. region segmentation or pattern template matching, and are denoted B1, B2, B3 and B4; other mark-detection methods are not listed and may be substituted equivalently. The first characteristic lines of the physical object are B1B2 and B1B4. A1, A2, A3 and A4 are key points of the projection picture, whose region is obtained by processing the captured image as described above and is not repeated here.
For template paper without printed key points, the system can also directly obtain the paper's region in the captured image by image processing and determine the corresponding rectangular reference area; however, determining the rectangular reference area from key-point mark images gives a more accurate result.
When calculating the size of the virtual image, the X-axis pixel value of the template image in the projection picture is first calculated from the second characteristic line C1C2:

X1 = K3 × (LA1A2_Image2 / LA1A2_Image1) × LB1B2_Image1

and then the X-axis pixel value is calculated from the second characteristic line C1C4, converted through the template's fixed aspect ratio:

X2 = K3 × (LA1A2_Image2 / LA1A2_Image1) × LB1B4_Image1 × (X_Image3 / Y_Image3)
Preferably K3 = 1, i.e. the reference rectangular area to be drawn by the user coincides exactly in size with the reference rectangular area of the template paper. K3 expresses the proportional relationship between the physical module and the projected virtual image and can be set as required.
The information processing unit compares X1 and X2, takes the smaller of the two as the X-axis pixel value of the virtual image, and projects the virtual image scaled proportionally to that value.
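A sketch of this width/height-constrained fit follows, assuming (as in this embodiment) that C1C2 and C1C4 span the full width and height edges of the template image, so the two constraints reduce to the first-feature-line lengths B1B2 and B1B4; all names are illustrative.

```python
def template_x_pixels(la1a2_img2, la1a2_img1, lb1b2_img1, lb1b4_img1,
                      k3, x_img3, y_img3):
    """Two X-axis candidates for fitting the template image onto the paper:
    X1 is width-limited (via B1B2/C1C2), X2 is height-limited (via B1B4/C1C4)
    converted back to an X value through the template's fixed aspect ratio.
    The smaller candidate keeps the whole virtual image inside the paper's
    reference rectangle."""
    scale = la1a2_img2 / la1a2_img1      # capture -> projection scale factor
    x1 = k3 * scale * lb1b2_img1         # width-limited candidate
    y2 = k3 * scale * lb1b4_img1         # height limit, projection pixels
    x2 = y2 * (x_img3 / y_img3)          # height limit expressed on the X axis
    return min(x1, x2)
```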
In this embodiment, the relative position between the virtual image and the physical module is determined to be overlaid display, according to the system settings or the second sub-module information of the physical module, and the virtual image size uses the smaller pixel value obtained above. The information processing unit obtains the reference rectangular area of the key points in the captured image, calculates the coordinates of its geometric center in the projection picture, and makes the geometric center of the virtual image coincide with it. Even if the paper moves relative to the projector, the system updates the position of the projected virtual image from the position of the reference rectangular area acquired in real time; the updated position can be obtained with a common real-time image target detection scheme or a detection-and-tracking scheme.
In practical use, e.g. handwriting or painting copy training, the user can combine the template paper with a matching pad or mat. Once the paper is placed and fixed, its position relative to the pad stays unchanged, so the template paper and pad can be treated as one physical module. Specifically, a pad larger than the template paper and provided with a placement area is used, and the paper is placed on that area; the pad surface not covered by paper carries mark patterns for detection and identification, and the upper surface also carries scale-mark patterns so that the relative position of paper and pad can be located accurately during image analysis. In use, the paper is first fixed on the pad; by detecting the pad's region and key positions in the captured image, the system determines the paper's reference rectangular area from the proportional relationship between the whole pad area and the paper placement area, and then performs the subsequent calculations.
In order to ensure the relative fixation between the template paper and the cushion, the cushion is provided with a template paper fixing structure such as a fixing clip, a clamping plate or a limiting rod, and the template paper fixing structure can at least fix one side of the template paper.
Preferably the top and left edges of the template paper are fixed, i.e. the fixing devices sit along the top and left sides; this holds the paper firmly while keeping the fixings out of the way of writing. This approach needs no specific pattern printed on the paper, works with ordinary paper while still giving accurate results, offers high generality and extensibility, and saves printing cost.
A further pad structure is shown in fig. 15: pad 80 comprises an upper pad layer 81 and a lower pad layer 82, the upper layer folding down and pressing onto the lower layer so that a gap is formed between them in which the template paper 90 is placed and fixed. The upper surface of the side of upper pad layer 81 facing away from lower pad layer 82 carries a pattern layer 811, which serves as the mark pattern for image recognition. Using the image-processing methods above, the system determines the rectangular reference area of the whole pad in the captured image and its orientation from the mark pattern; since the paper area and the whole pad area stay fixed relative to each other once the paper is clamped, the paper's region, and hence its reference rectangular area, is determined from the whole pad's region combined with the detected surface scale marks or the known positional relationship between the fixed paper and the pad. So as not to interfere with writing, which commonly runs left to right and top to bottom, the fixing gap is placed at the top or left of the pad. The paper fixing can also be a releasable clamping structure: at a in fig. 16 it is a clamping rod 83 at the upper end of pad 80; at b in fig. 16, a clip 84. As shown in fig. 31, limiting holes 91 on the left side of the template paper 90 mate with limiting rods 812 on the pad to fix the paper. As shown in fig. 32, to hold the paper better, the lower pad layer 82 carries limit protrusions 813 over which punched paper is placed, and the upper pad layer 81 carries matching grooves 814; magnetic blocks at corresponding positions inside the two layers increase the clamping pressure on the paper and keep it from sliding relative to the layers; the clamping surface of upper layer 81 carries raised anti-slip strips 815 that further increase friction, and these strips are matched to the thickness of the limit protrusions 813 so that the two layers still clamp the paper when closed.
In this embodiment the physical module is placed by the user, so it may be rotated somewhat and not parallel to the sides of the projection picture; the sides of the paper's reference rectangular area then form an angle with the corresponding sides of the projection picture. If the virtual image were projected directly, part of it might fall outside the paper or sit tilted relative to it, and the user could not copy normally.
In order to avoid the above situation, to ensure that the virtual image is matched with the template paper during projection, the following two ways can be adopted for implementation.
In the first way, as shown in fig. 17, the system controls the projection picture to project criss-cross grid reference lines for the user to align the template paper against: for paper with a rectangular reference area, the right angles of the paper's edges, or of marks printed on it, are aligned with the right angles of the reference lines, so that the paper's edges are parallel to the X and Y axes of the projection picture. That is, the projected reference lines guide the user to place the paper correctly.
In the second way, as shown in fig. 14, the system calculates the rotation angle of the template paper relative to the projection picture, rotates the virtual image by that angle, and displays it within the paper's reference rectangular area.
Specifically, as shown in fig. 14, the edges of the template paper are not parallel to the X and Y axes of the projection picture. In the captured image, the system calculates the angle between B3B4 and A3A4 from the coordinates of the bottom edge B3B4 of the paper's reference rectangle and the bottom edge A3A4 of the projection picture. The angular rotation parameter is the same in the projection-picture frame and the captured-image frame; therefore, after rotating the virtual image by the corresponding angle about its center coordinates in the projection picture, the system combines the computed size and position of the virtual image and controls the projection picture to display it consistent with the orientation of the template paper. The rotation angle can equally be computed from the angle between any other side of the paper and the corresponding side of the projection picture.
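A minimal sketch of the angle computation and rotation, assuming OpenCV; endpoint coordinates come from the key-point detection above, and the simple warp below crops corners rather than enlarging the canvas, which is omitted for brevity.

```python
import math
import cv2

def rotation_angle(b3, b4, a3, a4):
    """Angle (degrees) between the paper's bottom edge B3B4 and the
    projection picture's bottom edge A3A4, both in the captured image."""
    ang_paper = math.atan2(b4[1] - b3[1], b4[0] - b3[0])
    ang_frame = math.atan2(a4[1] - a3[1], a4[0] - a3[0])
    return math.degrees(ang_paper - ang_frame)

def rotate_virtual_image(img, angle_deg):
    """Rotate the virtual image about its centre before projection."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))
```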
The second way lets the user place the template paper anywhere on the projection picture: even if the paper moves or rotates within the picture, the system recomputes position and angle and adjusts the virtual image so that it always matches the paper.
In practice the user traces the virtual image on the template paper with a pen. Within the system, the top and bottom edges of the projection picture are fixed, and so are those of the pre-stored template image; but since the user's position relative to the projection picture or paper is not fixed, neither is the relationship between the virtual image's top and bottom edges on the paper. Projecting the virtual image therefore involves the question of its display orientation, where the orientation of the virtual image is the direction from the midpoint of its bottom edge to the midpoint of its top edge.
As shown in fig. 18, since the user traces the virtual image on the paper with a pen, the bottom edge of the virtual image needs to be nearest the user, so that the user faces the image from that position. In fig. 18, A1-A4 are the four vertices of the projection picture and C1-C4 the four vertices of the virtual image; following the projector's actual output, the top-left corner of the projection picture is labeled A1 and the others A2, A3, A4 clockwise or counterclockwise, and likewise the top-left vertex of the pre-stored template image is C1, the others C2, C3, C4 clockwise or counterclockwise. Panel a of fig. 18 shows wall projection: the top edge of the picture is A1A4 and the bottom A2A3; by default the virtual image's top edge is C1C4 and its bottom C2C3, so the virtual image and the projection picture share the same orientation. With desktop projection, taking a rectangular desktop for explanation, the projection picture corresponds to four sides; the top edge is still A1A4 and the bottom A2A3, but the correct orientation of the virtual image differs depending on which side the user works from. Specifically, in panel b of fig. 18 the user works near side A2A3, and the projected virtual image C1C2C3C4 has top edge C1C4 and bottom edge C2C3; in panel c the user works near side A3A4 and the virtual image is rotated 90 degrees counterclockwise in the projection picture so that its bottom edge C2C3 is near the user; in panel d, near side A1A4, it is rotated 180 degrees counterclockwise; in panel e, near side A1A2, it is rotated 270 degrees counterclockwise. Thus, with the positions of the projection picture and the object module fixed, the bottom edge of the virtual image should be nearest the user to keep operation convenient.
Taking wall projection as an example: during interaction the user always faces the projection picture, so when template paper is placed on it the projection picture and virtual image are oriented alike by default. The four vertices of the projection picture are labeled from the top-left vertex as A1 and then A2, A3, A4 counterclockwise; similarly the paper's top-left key point in the projection picture is B1, the rest B2, B3, B4 counterclockwise, and the four vertices of the stored virtual image are C1 from the top left and C2, C3, C4 counterclockwise. During projection, B1 is matched with C1, B2 with C2, B3 with C3 and B4 with C4; the virtual image is projected normally following the projection picture.
With desktop or ground projection the picture lies on a horizontal plane and the user may work from any side of it. To keep the virtual image facing the user, one scheme has the system determine the orientation of the virtual image automatically and then rotate it within the projection picture. The present invention determines the orientation of the virtual image by the following methods.
1. As shown in fig. 19, the captured image covers the projection picture and its surroundings; the regions of the human body and the template paper are detected by image processing, the human body region nearest the current paper is found, and the edge of the projection picture nearest that body region fixes the orientation of the paper's virtual image. More specifically, in fig. 19 the top edge of the picture is A1A2; the left person is nearest edge A1A4, so the bottom edge of that person's virtual image should be near the left side of the picture, i.e. the image is rotated 270 degrees counterclockwise; the right person is nearest edge A3A2, so that image's bottom edge should be near the right side, i.e. rotated 90 degrees counterclockwise; the remaining directions follow by analogy. Once the orientation is determined, the information processing unit rotates the virtual image accordingly and projects it on the paper. In short, image detection locates the person associated with each sheet of template paper, and the side of the projection picture nearest that person sets the orientation of the virtual image.
2. As shown in fig. 20, the object module carries a printed pattern with a direction, and the information processing unit determines the virtual image orientation from that pattern, e.g. by pattern template matching or by comparing image rotation features; the virtual image is then rotated and projected onto the object module accordingly. In this method, the physical module is template paper with a text pattern, or blank paper fixed to a pad bearing a text pattern; the information processing unit determines the orientation of the virtual image from the orientation of the text, rotates the virtual image to match, and projects it onto the module. The orientation of the text pattern is determined by comparing text image features.
3. As shown in fig. 21, hollow marks representing several directions are printed on the template paper; the user colors in the hollow mark for the desired direction, making it solid. The sensing module captures an image of the object module, determines the orientation of the virtual image from the direction of the solid mark, and the information processing unit rotates and projects the virtual image accordingly. More specifically, as shown in fig. 22, the paper carries hollow arrows indicating four directions; the user fills one in to form a solid arrow, the sensing module identifies the solid arrow in the captured image, and the orientation of the paper's virtual image follows its direction. In effect, the user selects the orientation by coloring one of the preset directional hollow marks. The hollow marks are not limited to arrows; any hollow pattern that can distinguish the four directions will do. Compared with method 2, this lets the user freely choose the orientation rather than accept a fixed one on the paper.
It can be seen that, to let the user trace the virtual image accurately from any side of the projection picture, the orientation of the virtual image is determined before projection by identifying the direction of the human body or the direction of the object module's pattern in the captured image.
It should be noted that the projection picture is generally a rectangle with four sides, each side corresponding to one orientation. The actual projection picture may, however, be a polygon formed by stitching or clipping, e.g. a regular hexagon, in which case each side again corresponds to one orientation and the methods above apply equally to determine the orientation of the paper's virtual image.
As shown in fig. 23, another scheme defines placement areas with different orientations inside the projection picture, each area corresponding to a single, distinct orientation; a reference direction can be projected inside each area as a hint, and the areas do not overlap one another. After the user places the object module on the projection picture, the module and the extent of each placement area are detected from the captured image; the overlap area between the module's region and each placement area is compared, the orientation of the area with the largest overlap is taken as the orientation of the virtual image, and the virtual image is projected onto the module accordingly. In this scheme the projection picture is partitioned into placement areas with fixed virtual-image orientations; the system decides which area the module occupies and displays the virtual image on the module with that area's orientation.
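The largest-overlap decision is a small argmax over rectangle intersections. A sketch follows; the four placement areas and their coordinates are a hypothetical layout, one area per edge of the projection picture.

```python
def overlap_area(r1, r2):
    """Pixel area of the intersection of two (x, y, w, h) rectangles."""
    ix = max(0, min(r1[0] + r1[2], r2[0] + r2[2]) - max(r1[0], r2[0]))
    iy = max(0, min(r1[1] + r1[3], r2[1] + r2[3]) - max(r1[1], r2[1]))
    return ix * iy

def pick_orientation(module_rect, placement_areas):
    """placement_areas maps an orientation (counterclockwise rotation in
    degrees) to its area rectangle; the largest overlap with the physical
    module wins."""
    return max(placement_areas,
               key=lambda ang: overlap_area(module_rect, placement_areas[ang]))

# Hypothetical layout in captured-image coordinates:
areas = {0: (300, 400, 400, 200), 90: (700, 100, 200, 400),
         180: (300, 0, 400, 200), 270: (0, 100, 200, 400)}
angle = pick_orientation((320, 380, 120, 90), areas)  # -> 0
```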
In the schemes above, one physical module corresponds to one virtual image; in practice one module may correspond to several virtual images, i.e. one sheet of template paper to several images. Take copybook template paper as an example: when the rows and columns of the virtual copybook match the rows and columns ruled on the paper, the copybook can be projected as a whole by the method above. When characters must be projected one at a time, or the copybook's row count does not match the paper's, a single physical module must correspond to several character virtual images; as shown in fig. 20, a single sheet carries several tian-shaped (田) character grids. The vertex of each grid is a key point; the system detects the module in the captured image and recognizes each key point and its grid region through image feature analysis and comparison, the virtual image being a single character. Each grid region is a reference rectangle awaiting projection, and each character image a corresponding virtual template image to be projected. Specifically, the grid belonging to each key point can be matched from the pattern of the cross lines inside the grid and the grid's outer frame lines; alternatively, if the paper's grid layout is known to the system, template matching against that known layout locates every grid on the sheet.
The tian-shaped grid may also take other boundary-image forms, e.g. squared grid lines or circular lines; the system compares the corresponding image features and determines each reference rectangle from the key points or lines of the boundary.
The virtual image may be a template image stored in advance in the system, and different text template images are projected in corresponding field grids according to the calculation methods of the size, the position, the rotation angle and the like of the virtual image.
The virtual template image may also be obtained by preprocessing an actual copybook template image. The actual copybook is first segmented by image detection combined with an OCR algorithm, so that the image of each character becomes an independent single-character virtual image; to improve the projected result, the extracted single-character images are optimized for display by enhancement, smoothing and denoising, after which they are projected the same way as single-character images above. Preferably, the region of each character is determined by OCR, the frame lines of the character grid around each character region are then detected, and the virtual character template image is extracted with the grid frame lines as reference. During projection the system can follow the original copybook order, or the user can select individual character images from the copybook to project. For concrete OCR implementations, see Joseph Howse and Joe Minichino, Learning OpenCV 4 Computer Vision with Python 3 (Chinese edition, ISBN 9787111689485, translated by Liu Bing and Gao Bo).
In this embodiment the physical module is template paper and the virtual image can take several forms: a connect-the-dots drawing, where the user joins the numbered marker points in sequence; a simple drawing, where the user traces the lines and then fills in colors; a children's copybook, where the user traces the projected characters; or a drawing-related game such as a maze or an interactive quiz. There may be one virtual image or several projected step by step: for connect-the-dots, only two points are projected at a time, and once the user's connection is detected the next points are projected; similarly, character tracing can use dynamically projected strokes so that the user can follow stroke order and direction. Compared with traditional printed tracing or copying paper, the user gets far richer copying material, can choose patterns freely, and printing costs are reduced.
Further, for handwriting practice the system can evaluate the quality of the user's writing. The system captures images of the user writing inside a character grid and determines the written content and stroke order by comparing images before and after each stroke. Based on the size of the character grid in the captured image, the user's writing and the template character image can be scaled to a uniform size for comparison; concretely, the system extracts the user-written character image from the capture and scales it into the projection-picture reference frame to compare with the template character. The system compares the image difference of corresponding strokes between each of the user's stroke images and the template virtual image, preferably scoring by the degree of coincidence of the corresponding stroke patterns in the two images. It also compares the user's actual stroke order with the stored standard stroke-order data of the character and marks the written images of any inconsistent strokes; after writing is finished, dynamic images of the user's strokes and of the standard character's strokes are presented for the user to study and compare. The system can further build per-user writing datasets keyed by user ID, notably of low-scoring and high-scoring samples: high-scoring samples are turned into dynamic writing images for sharing, while for low-scoring characters the system, on recognizing the returning user's ID, orders the user's previously weak characters by error frequency, score, practice count, time since last practice and similar parameters, either singly or by a common multi-parameter weighting, and projects them onto the template paper for practice; the user can also choose these settings within the writing training dataset.
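The "degree of coincidence" score can be realized in many ways; a minimal sketch using intersection-over-union of binary stroke masks follows. Treating IoU as the score is an illustrative choice, not a metric the patent fixes.

```python
import numpy as np

def stroke_score(user_stroke, template_stroke):
    """Coincidence score for one stroke: both inputs are binary (0/255)
    stroke masks already scaled to the same size in the projection-picture
    frame. Returns intersection-over-union in [0, 1]."""
    u = user_stroke > 0
    t = template_stroke > 0
    union = np.logical_or(u, t).sum()
    if union == 0:
        return 0.0
    return np.logical_and(u, t).sum() / float(union)
```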
An existing copying approach projects the calligraphy or painting onto the paper and then copies it; however, the positional relationship between the projection picture and the copy paper is adjusted manually, and once adjusted, neither the projection nor the paper may move. For hard-pen handwriting copying in particular, any movement of the paper requires manual readjustment, which is very inconvenient.
An intelligent calligraphy-and-painting copying device mainly comprises an image acquisition unit, a projection unit, a writing unit and an information processing unit. The writing unit is paper, or paper held by the template paper fixing device; the image acquisition unit captures an image including the projection picture and the paper. By analyzing the captured image, the device determines the area of the paper or fixing device and the area of the projection picture; the information processing unit determines the rectangular reference area on the paper from the former and the projection picture's reference rectangle from the latter, determines the position of the paper's rectangular reference area within the projection picture, and controls the projection unit to project the corresponding image within that position range.
An intelligent handwriting learning device mainly comprises an image acquisition unit, a projection unit, a writing unit and an information processing unit. The writing unit is paper printed with an area-division pattern, and the acquisition unit captures images including the projection picture and the writing unit in real time. The information processing unit analyzes the captured images before and after the user writes in a given division area and extracts the image the user has written; based on the position of the division area in the captured image, it scales the user's writing and the template character image to a uniform size, compares the corresponding strokes of the two, and gives a score.
In a third embodiment, the physical module is a racing-car model and the virtual image is the corresponding racing-related content. The interaction is for the user to drive the model along the race track; during movement, a changing wake image can be projected according to changes in the model's speed or acceleration, or a cargo image projected according to the car's width.
In this embodiment, ground or desktop projection is generally used: the projection module first projects the projection picture, the user places the car model inside it, and the sensing module captures images containing both the projection picture and the car model.
The information processing unit identifies key points on the car model in the captured image, determines a first characteristic line from them, and sizes the racetrack virtual image from that line. More specifically, as shown in fig. 24, four key points sit at the four corners of the model's top surface; they are located in the captured image by image processing and the corresponding reference rectangular area is determined. Preferably, the key points are specific mark patterns on the model's surface, with different patterns on the head and tail to make the two easy to distinguish. A key point may be the center of a mark pattern, e.g. a common circle or rectangle, as shown at a in fig. 24, or the end points of a stripe structure, e.g. the two ends of a patch, as shown at b in fig. 24.
Further, the reference rectangular area and key points may be acquired as follows. The information processing unit compares captures with and without the car model by the image-difference method, removes noise regions, and determines the model's reference region; an AI-based image target detection algorithm may be used instead; and for car model pictures already stored in the information processing unit, the reference region can be found by image template feature matching. The minimum circumscribed rectangle of the reference region is taken as the reference rectangular area, its vertices as the corresponding key points, and its corresponding side as the first characteristic line.
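A sketch of the minimum-circumscribed-rectangle step, assuming OpenCV and a contour of the detected car region from any of the detection methods above:

```python
import cv2

def reference_rect_from_region(region_contour):
    """Minimum circumscribed rectangle of the detected car region: its
    vertices serve as the key points B1..B4 and one side as the first
    feature line."""
    rect = cv2.minAreaRect(region_contour)   # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)            # 4 vertices = key points
    b1, b2 = corners[0], corners[1]
    first_feature_len = ((b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2) ** 0.5
    return corners, first_feature_len
```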
In the first application scenario, the virtual image of the racing track is determined from the racing car model in the acquired image. As shown, the model is placed within the capture range and the projection module projects a racetrack virtual image of the corresponding size. In the acquired image, the racing car model yields two key points B1 and B2, giving the first characteristic line B1B2. Taking as an example a track width twice the width of the racing car model (the actual multiple can be set as required), the second characteristic line corresponding to the track width in the template image is C1C2; the original pixel length of C1C2 in the template is obtained by calibration, and the size of the virtual image in the projection picture can then be determined by combining the template's original resolution, the projection picture's resolution, and the parameters of the projection picture's reference line.
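The calculation above is a chain of ratios; the short sketch below makes it explicit for the track-width example. All names are illustrative, and the factor of 2 is simply the example multiple from the text.

    # Convert the car width measured in the acquired image into projector
    # pixels via the shared reference line, then apply the preset multiple.
    def track_width_px_in_projection(b1b2_px_captured,    # first characteristic line, acquired image
                                     ref_px_captured,      # reference line, acquired image
                                     ref_px_projection,    # reference line, projection picture
                                     multiple=2.0):        # track = 2x car width (example)
        px_per_captured_px = ref_px_projection / ref_px_captured
        return multiple * b1b2_px_captured * px_per_captured_px

    # The stored template is then rescaled so that its second characteristic
    # line C1C2 (c1c2_px pixels at the template's original resolution) spans
    # the computed width in the projection picture.
    def template_scale(track_width_px, c1c2_px):
        return track_width_px / c1c2_px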
When displayed, the virtual image of the track has no fixed positional relationship with the racing car model; preferably it is centered in the projection picture. Within the same projection picture, racing car models of different sizes yield correspondingly different track virtual image sizes.
When the virtual image is a simple track graphic generated by the information processing unit, as shown in fig. 26, its resolution equals the resolution of the projection picture; the graphic size is preferably set to match the projection picture size, and the internal track width can be set as required. Again taking a track width of twice the car width as an example, the information processing unit can directly compute the pixel length of the track width in the projection picture and then generate a track graphic of that width.
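For this generated-graphic case, a minimal sketch (a straight lane, with illustrative color and layout) might read:

    import cv2
    import numpy as np

    # Generate a simple track graphic at the projection picture's own
    # resolution, with the lane width already given in projector pixels.
    def make_track_image(proj_w, proj_h, lane_px):
        img = np.zeros((proj_h, proj_w, 3), np.uint8)
        cy = proj_h // 2
        # A straight horizontal lane; a real design could use any track shape.
        cv2.rectangle(img, (0, cy - lane_px // 2), (proj_w, cy + lane_px // 2),
                      (60, 60, 60), thickness=-1)
        return img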
In the second application scenario, the information processing unit projects wake images of different lengths according to the relative speed or relative acceleration of the racing car model. As shown in fig. 27, the width of the wake image is determined from the length of the model's first characteristic line (the car width) or from user settings. Specifically, the wake width is K4 times the first characteristic line, where K4 is a scaling factor set according to actual conditions. As described above, the information processing unit can determine the pixel length of the first characteristic line within the projection picture, and from that the wake image's width in the projection picture.
The length of the wake image is determined from the relative speed or relative acceleration of the racing car model. The information processing unit tracks the model's position in real time in the acquired images and computes its displacement over a short interval, thereby estimating its instantaneous speed or acceleration (over a short enough interval the motion can be treated as uniform velocity or uniform acceleration, per the configuration); the larger the relative speed or acceleration, the longer the wake image. The unit determines the wake length from a preset conversion between a reference speed (or acceleration) and wake length. More specifically, the reference speed is one car length per second, where the car length is approximated by the first characteristic line B1B4 as shown in fig. 27, and the wake length at the reference speed is set to half a car length, i.e. half of B1B4. The wake length at any speed then follows from the ratio of that speed to the reference speed, measured in the acquired image; the result is converted into a pixel length in the projection picture as before. Determining wake length from relative acceleration is analogous and is not repeated here.
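A minimal sketch of this rule, with the reference speed and half-car-length wake taken from the example above (the time step and names are illustrative):

    # Wake length from relative speed: reference speed is one car length per
    # second, and the wake at that speed is half a car length; the wake then
    # scales linearly with the speed ratio.
    def wake_length_px(displacement_px, dt_s, car_length_px):
        speed = displacement_px / dt_s        # px/s in the acquired image
        ref_speed = car_length_px / 1.0       # one car length per second
        ref_wake = 0.5 * car_length_px        # half a car length
        return ref_wake * (speed / ref_speed)

The result, still in acquired-image pixels, is converted to projection-picture pixels with the same reference-line ratio used for the track width.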
Furthermore, instead of deriving relative speed or acceleration from the model's real-time position in the acquired images, other common position-acquisition devices can be used. A UWB positioning device is preferred: positioning base stations are placed in the four corner areas around the projection picture and a positioning tag is fixed on the racing car model; after coordinate calibration, the device sends position information to the system in real time, from which the system computes the model's relative speed and acceleration. Compared with image analysis, a dedicated positioning device offers better real-time performance and precision but at higher cost, so the approach can be chosen as needed in practice.
It should be noted here that the calculation of relative speed and relative acceleration does not depend on the choice of measurement reference frame, since only ratios to a reference value are used; the result is the same whether an absolute or a relative frame (or unit) is chosen. For example, suppose the relative speed and the relative wake length are linearly related with coefficient 1: at a car speed of 1 cm per second the wake length is 0.5 cm, and at 3 cm per second it is 1.5 cm. If the speed is instead measured from the acquired images in pixel units, the speed value is still three times the reference, so the wake length is likewise 3 times its reference. That is, car speed and wake length satisfy a set correlation, whose formula can be chosen as needed and may be linear or nonlinear. A linear correlation is generally preferred: if the speed is K5 times the reference value, the wake length is also K5 times its reference value.
The shape of the wake image may be designed as required: its length is determined by the model's speed and its width by the model's width, and a gradually tapering shape is generally adopted. The width of the wake image refers to its maximum width at the initial position. Different racing cars can be matched with wake images of different styles; preferably, the corresponding wake pattern is matched automatically from the appearance of the car in the acquired image or from tag information on the car. The correspondence between reference speed and wake length can also be adjusted as needed. The scheme conveys the model's real-time state through the relative length of the wake image, turning abstract speed or acceleration into a visible graphic and enhancing the spectacle and entertainment of use.
The position of the wake image is determined from the position of the racing car model's third characteristic line. Specifically, as shown in fig. 27, one wake image style is an isosceles triangle with a base and two equal sides. Using the method above, the reference rectangular area of the model is determined; since key points B3 and B4 lie at the tail of the car, they can be identified in the acquired image, and the edge through B3 and B4 is taken as the third characteristic line. The information processing unit places the line containing the wake's base parallel to or coincident with the third characteristic line, centers the base on the perpendicular bisector of the third characteristic line, and displays the wake on the side of the third characteristic line away from the car.
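The placement rule is plain vector geometry; a sketch, with coordinates assumed to be in projection-picture pixels:

    import numpy as np

    # Place an isosceles-triangle wake: base on the line through tail points
    # B3 and B4, centered on their perpendicular bisector, apex pointing away
    # from the car body.
    def wake_triangle(b3, b4, car_center, base_w, length):
        b3, b4, car_center = (np.asarray(p, float) for p in (b3, b4, car_center))
        mid = (b3 + b4) / 2.0
        base_dir = (b4 - b3) / np.linalg.norm(b4 - b3)
        normal = np.array([-base_dir[1], base_dir[0]])   # unit normal to the base
        if np.dot(normal, car_center - mid) > 0:         # flip so the apex faces away from the car
            normal = -normal
        p1 = mid - base_dir * (base_w / 2.0)
        p2 = mid + base_dir * (base_w / 2.0)
        apex = mid + normal * length
        return p1, p2, apex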
Further, for different models of racing car, the system can automatically match different wake image template styles, i.e. the system determines the wake pattern from the first sub-module of the racing car model. Specifically, pattern styles for different car styles are preset in the system; the system locates the region corresponding to the model in the acquired image, compares that region's image against pre-stored image features of the different car styles, and selects the best match, thereby fixing the wake pattern style. As described above, the system can also read an information tag attached to the car and match the corresponding pattern.
In the third application scenario, the system projects interactive images according to the size and position of the racing car model. As shown, according to a difficulty level chosen by the user or set by the system, virtual goods, obstacles and similar virtual images are projected onto the racing field, sized according to the car, i.e. according to the length of its first characteristic line. As shown in fig. 28, the virtual image's original resolution is obtained, and its width is set to 0.8 times the car's first characteristic line, a ratio adjustable to the actual situation. Its size in the projection picture then follows from the calculation above; its position can be set by the game. By analyzing the acquired images, the system detects when the car reaches a virtual item, specifically when the car's reference area covers a set proportion of the item's area, and credits the corresponding score.
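The coverage test itself can be written directly on binary masks; a sketch in which the 0.6 threshold is an illustrative assumption:

    import cv2
    import numpy as np

    # Credit a score when the car's reference area covers a set proportion of
    # a virtual item's area; both masks are binary images in projection-picture
    # coordinates.
    def item_collected(car_mask, item_mask, ratio=0.6):
        overlap = cv2.bitwise_and(car_mask, item_mask)
        item_area = np.count_nonzero(item_mask)
        return item_area > 0 and np.count_nonzero(overlap) / item_area >= ratio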
Conventional racing interactive projection games tend to have a single, fixed visual effect and lack a visualization of the car's motion state integrated with its movement.
An intelligent racing car projection system comprises a position acquisition unit, a projection unit and an information processing unit. The system obtains the car's real-time position on the field through the position acquisition unit; the information processing unit computes real-time speed or acceleration from the position information and determines the wake image length in the projection picture accordingly; the projection unit projects a wake image of the corresponding length at the car's position in real time.
In one variant, the position acquisition unit comprises an image acquisition device that captures images containing the car and the projection picture area in real time; the car's position in the images is determined, the relative speed or relative acceleration is computed, and its ratio to the reference value fixes the relation between the real-time wake image length and the reference wake image length.
In another variant, the position acquisition unit comprises a UWB positioning device that provides the car's real-time position; the relative speed or relative acceleration is computed, and its ratio to the reference value fixes the relation between the real-time wake image length and the reference wake image length.
In the fourth embodiment, a music interactive game, a favorite of children, is considered. Traditionally, a child touches a mechanical or electronic instrument by plucking, striking and the like to make it sound, but owing to the limitations of the instrument's own timbre, one instrument often produces only one kind of sound. In this embodiment, the user first places a physical instrument module, such as a physical drumstick or a music card; the projection module then projects a virtual image of the related instrument; the user interacts with the virtual image using the module or a hand; and the information processing unit determines the user's interaction mode with the virtual image by analyzing the image content, and from it the multimedia content corresponding to that interaction.
As shown in fig. 29, for ease of operation the projection picture contains a placement area for the physical module and a display area for the virtual image. More specifically, the projection module projects the outline of the placement area in the projection picture, prompting the user to place the module there and making it easier to detect the module in the acquired image.
Further, the system can pop up an interactive menu from which the user selects the corresponding virtual image.
Alternatively, the system can determine the corresponding virtual image automatically: after the physical module is placed in the placement area, the information processing unit locates it in the acquired image using an artificial-intelligence image target detection algorithm, an image difference method or image feature template matching, and selects the virtual image content according to the module type. Note that by reading the module's first sub-module the system can automatically match the virtual image content to the module, specifically by matching the pattern in the module's region of the acquired image against pre-stored pattern features.
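One plausible realization of the feature-matching branch, sketched with ORB features; the library of stored module images and its IDs are assumptions for illustration, not part of this disclosure.

    import cv2

    # Match the module's region in the acquired image against stored module
    # images and return the best-matching module ID, which then selects the
    # corresponding virtual image content.
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def best_matching_module(region_gray, library):    # library: {module_id: grayscale image}
        _, des_region = orb.detectAndCompute(region_gray, None)
        best_id, best_score = None, 0
        for module_id, ref_img in library.items():
            _, des_ref = orb.detectAndCompute(ref_img, None)
            if des_region is None or des_ref is None:
                continue
            score = len(matcher.match(des_region, des_ref))
            if score > best_score:
                best_id, best_score = module_id, score
        return best_id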
In this embodiment the virtual image is mainly a template image, sized as in the first embodiment: the lengths of the module's first characteristic line and the projection picture's reference line are identified in the acquired image, and the virtual image's pixel size in the projection picture is determined from the template's pre-stored resolution, the reference line's pixel length in the projection picture, and the projection picture's resolution.
In fig. 29, a sensing chip is embedded in the drumstick; it stores the virtual image ID corresponding to the drumstick and the ratio between the virtual image size and the drumstick length. The sensing module reads the chip, the system measures the drumstick's length in the acquired image as the first characteristic line, and the virtual image's size in the projection picture follows from that ratio together with the reference line parameters.
After the virtual image is displayed, the sensing module continues to capture the user's interaction with it, and the information processing unit analyzes the images to determine the user's interaction action mode and the interaction position on the virtual image. Interaction modes include the usual clicking, long pressing, tapping, stroking and so on; interacting in different areas of the virtual image can produce different responses, so the interactive content is determined jointly by mode and position. As shown in fig. 29, the virtual image is a drum: the user can strike it with the physical drumstick or draw across it, striking and stroking being different interaction types, and when the corresponding action is detected the system plays the matching sound effect. Note that the interaction modes must be preset in the system; detection works by analyzing the image features characteristic of each mode and deciding which mode the current images actually match.
To further increase interest, this embodiment also lets the user practice on the virtual instrument and be evaluated. Besides projecting the instrument's virtual image, the system can project the corresponding musical score and play reference audio. During the interaction the user performs actions on the instrument's virtual image following the score; by analyzing the captured images the system checks whether the user's action at each time point matches the standard action set for that point in the score, assigns scores by degree of match, and after practice replays the stored user action images for each time point. Preferably, the lowest-scoring action images are shown to the user alongside the corresponding positions in the score.
A virtual music interaction system comprises an image acquisition unit, a projection unit and an information processing unit. The projection unit projects a virtual instrument pattern, and the image acquisition unit captures the user's interaction on the projection picture in real time; the information processing unit analyzes the user's action mode and its position on the instrument pattern, determines the corresponding multimedia response from the mode and position, and outputs it through the projection unit.
The fifth embodiment concerns early-childhood programming. The aim is not to teach young children to write code or applications, nor to train them as programmers, but to let them program through games with visible, touchable physical modules. In a typical game task, children break down the task, plan paths, place modules, try, correct errors, and complete the task, cultivating the computational thinking behind programming through hands-on practice.
Although tangible programming for young children has broad market demand, the prior art generally reads information from an information module or card to determine a programming instruction, and the system then drives a toy, such as a cart model, a mechanical arm or a lighting device, accordingly. In the prior art, a single card or information module thus carries only a single instruction; complex scenarios requiring complex instructions need combinations of many cards or physical modules, which makes the instruction set hard to extend.
In this embodiment, instructions are extended through interaction between a physical module and the projection. Specifically, the user places a module in the projection picture; the system captures an image containing the projection picture, determines a first characteristic line from the module in the image, and determines the virtual image's size from that line and the projection picture's reference line. The virtual image's position is determined relative to the module's position in the projection picture. The virtual image contains multiple instructions, and the user selects among them by manipulating the module, after which the system responds accordingly.
To ensure the virtual image content corresponds to the physical module, different patterns can be placed on the module's outer surface and recognized to select the virtual image, or a sub-module can be built into the module and read by radio frequency, chip sensing or similar means.
More specifically, as shown in fig. 30, the virtual image is a simple system-generated geometry, a sector-shaped ring, within which multiple instructions are evenly distributed; in fig. 30 different instructions are shown as different pattern contents. The inner and outer diameters of the ring are matched to the module's size. The information processing unit obtains the lengths of the module's first characteristic line and the projection picture's reference line in the acquired image, and from the reference line's pixel length in the projection picture determines the pixel lengths of the ring's inner and outer diameters.
In this embodiment the module preferably has a block shape such as a circle, square or rectangle. Following the image processing method above, the information processing unit first determines the module's reference rectangular area from the acquired image, then the center and radius of that area's circumscribed circle; any radius of the circumscribed circle serves as the first characteristic line. Preferably the ring's inner radius is 1.1 to 3 times the circumscribed circle's radius and its outer radius 1.1 to 4 times the inner radius. The circumscribed radius measured in the acquired image is converted to a pixel length in the projection picture, fixing the ring's inner and outer radii there. These proportions keep the virtual image matched to the module's size, without excessive relative deviation, so it is easy to view and interact with.
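A sketch of the ring sizing under these proportions; the factors 1.5 and 2.0 are illustrative picks from the stated ranges.

    import cv2

    # Ring geometry from the module's region: the minimum enclosing circle
    # approximates the circumscribed circle of the reference rectangle; the
    # inner radius is a multiple in [1.1, 3] of its radius and the outer
    # radius a multiple in [1.1, 4] of the inner radius.
    def ring_geometry(module_contour, inner_k=1.5, outer_k=2.0):
        (cx, cy), r = cv2.minEnclosingCircle(module_contour)
        inner = inner_k * r
        outer = outer_k * inner
        return (cx, cy), inner, outer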
To position the ring, the center of the module's circumscribed circle is first found in the acquired image, and its coordinates are converted into the projection picture's reference frame; the ring's center coincides with that converted center.
As fig. 30 shows, the ring contains multiple independent instruction blocks representing different instruction contents, and the user selects a block by rotating the physical module. More specifically, when the user rotates the module, the information processing unit measures the rotation angle and direction from the acquired images. If, say, the user rotates the module 30 degrees counterclockwise, then starting from the currently selected position on the ring and rotating 30 degrees counterclockwise about the circumscribed circle's center determines which instruction block is reached; that block's pattern is enlarged to show the selection, and the information processing unit finally responds to the block's instruction, for example by sounding or displaying graphics.
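The angle-to-block mapping reduces to modular arithmetic; a sketch assuming n equal instruction blocks, with counterclockwise angles positive:

    # Start from the currently selected angular position, add the measured
    # rotation, and find which of the n equal sectors the result falls in.
    def selected_block(current_angle_deg, rotation_deg, n_blocks):
        sector = 360.0 / n_blocks
        angle = (current_angle_deg + rotation_deg) % 360.0
        return int(angle // sector)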
Further, the candidate instruction blocks for a module are retrieved by identifying the module and displayed on the ring for the user to select. The module itself also represents an instruction; the module's instruction and the selected candidate block combine into an instruction combination, to which the system responds. For example, if the module's instruction is "advance" and its candidate blocks are 1, 2, 3 and 4 steps, then after the user selects the 2-step block, the combination means "advance 2 steps" and the system responds accordingly.
The sector-shaped ring may have any central angle greater than 0 degrees and at most 360 degrees. Evidently other ring-like images, and other common arrangements such as grids or lines, can also be applied in this embodiment.
An interactive programming system combining physical objects comprises a projection unit, an image acquisition unit and an information processing unit. The image acquisition unit captures an image containing the projection picture and the physical module; the area ranges of the projection picture and the module are determined in the image; the information processing unit computes the module's coordinates in the projection picture's reference frame; the system projects a corresponding ring image at those coordinates, different areas of which represent different instructions; and the user selects the instruction for a given ring area by rotating the physical module.
In the sixth embodiment, the system first detects a physical module placed in the projection picture, matches its pattern against preset content, and determines the positional relationship between the projected virtual image and the module from the match: a preset jigsaw information block yields the spaced display relationship of the first embodiment; preset template paper yields the coverage display relationship of the second embodiment; a preset car model yields the adjacent display relationship of the third embodiment; and a preset programming module yields the surrounding display of the fifth embodiment. Once the system has identified the display relationship for the module, it processes it according to the projection method for that relationship in the foregoing embodiments.
It should be noted that, in the description of the present invention, directions or positional relationships indicated by terms such as "upper", "lower", "front", "rear", "left", "horizontal direction" and "vertical direction" are based on the drawings; they serve only to simplify and clarify the description, and do not indicate or imply that the apparatus or component referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (19)

1. A projection picture output method based on a real object module is characterized in that,
the real object module is used for interacting with the projection picture so as to assist game interaction, corresponding virtual images are determined according to the information of the real object module, the virtual images are template images stored in the system or acquired through transmission or patterns generated by the system independently, and the matching of the virtual images and the real object module is one-to-one matching or one-to-many matching of one real object module corresponding to a plurality of virtual images;
Collecting an collected image comprising a projection picture and a real object module;
positioning and extracting a reference line of the projection picture in the acquired image, acquiring the length of the reference line in the acquired image, and acquiring the pixel length of the reference line in the projection picture;
the reference line is a connecting line of key points on the projection picture or a side of a reference rectangular area corresponding to the projection picture, and the reference line is a line segment representing the relative size of the projection picture;
positioning and extracting a first characteristic line of the object module in the acquired image, and acquiring the length of the first characteristic line in the acquired image;
the first characteristic line is a connecting line of key points on the object module or an edge of a reference rectangular area corresponding to the object module, and the first characteristic line is a line segment representing the relative size of the object module;
determining the size of a virtual image in a projection picture according to the length of the first characteristic line in the acquired image, the length of the reference line in the acquired image, the pixel length of the reference line in the projection picture and the proportional relation between the corresponding size of the virtual image actually projected and the corresponding size of a real object module;
Keeping the size and the position of the projection picture unchanged, and displaying the virtual image in the projection picture;
when the size of the projection picture is changed due to the change of the relative positions of the projection picture and the projector, calculating according to the image acquired after the position change, and determining the size of the virtual image in the projection picture.
2. The projection picture output method based on the object module according to claim 1, wherein the virtual image is a pre-stored template image with a second characteristic line;
the second characteristic line is a line segment corresponding to the virtual template image;
acquiring the original resolution of a pre-stored virtual image and the original pixel length of a second characteristic line;
the proportional relation between the corresponding size of the actually projected virtual image and the corresponding size of the real object module is the length proportional relation between the actually preset first characteristic line and the second characteristic line;
determining the pixel length corresponding to the second characteristic line of the virtual image in a reference frame by taking a projection picture as a reference frame according to the length of the first characteristic line in the acquired image, the length of the reference line in the acquired image, the pixel length of the reference line in the projection picture and the length proportional relation between the first characteristic line and the second characteristic line, which are actually preset;
determining the size of the virtual image in the projection picture according to the original resolution of the virtual image, the original pixel length of the second characteristic line, and the pixel length of the second characteristic line in the projection picture reference frame; and keeping the projection picture unchanged, and displaying the virtual image in the projection picture.
3. The projection picture output method based on the real object module according to claim 1, wherein the virtual image is a generated geometric figure;
acquiring preset attribute parameters of the geometric figure;
the proportional relation between the corresponding size of the actually projected virtual image and the corresponding size of the real object module is the proportional relation between the size parameter of the geometric figure and the length of the first characteristic line;
calculating and determining the pixel length of the size parameter of the geometric figure in the projection picture according to the length of the first characteristic line in the acquired image, the length of the reference line in the acquired image, the pixel length of the reference line in the projection picture and the proportional relation between the size parameter of the geometric figure and the length of the first characteristic line, keeping the projection picture unchanged, and displaying the virtual image determined by the attribute parameter and the size parameter in the projection picture.
4. The method for outputting a projection screen based on a physical module as claimed in claim 1, wherein,
the real object module comprises a first sub-module which can be identified, and the identification information of the first sub-module can determine the content, style, shape, original resolution or other relevant attribute parameters of the virtual image;
the first sub-module is an identifiable pattern or an identifiable structure or an induction chip of the physical module;
the proportional relation between the corresponding size of the actually projected virtual image and the corresponding size of the real object module is the proportional relation between the size parameter of the actually projected virtual image and the length of the first characteristic line;
calculating, from the pixel length of the reference line in the projection picture and the length of the reference line in the acquired image, the projection picture pixel length corresponding to unit length in the acquired image; and determining the size of the virtual image in the projection picture according to the length of the first characteristic line in the acquired image, the projection picture pixel length corresponding to unit length in the acquired image, the proportional relation between the size parameter of the actually projected virtual image and the length of the first characteristic line, and the original resolution of the virtual image.
5. The method for outputting a projection screen based on a physical module as claimed in claim 1, wherein,
When the reference line is a key point connecting line on the projection picture, the system displays a key point mark image at a corresponding coordinate position on the projection picture, and the coordinate distance between the key points, namely the pixel distance of the reference line, can be determined according to the coordinate position.
6. The method for outputting a projection screen based on a physical module as claimed in claim 5, wherein,
the key points are determined by directly detecting a mark image of the real object module or projection picture in the acquired image, by detecting the reference rectangular area corresponding to the real object module or projection picture, or by first detecting that reference rectangular area and then detecting the mark image within the relevant range of the rectangle; the reference rectangular area itself is determined by direct detection, by an artificial-intelligence image target detection algorithm, or by detecting the outline area of the real object module or projection picture from three-dimensional image information.
7. The method for outputting a projection screen based on a physical module as claimed in claim 1, wherein,
the position of the virtual image in the projection picture is calculated and obtained according to the position information of the object module and the projection picture in the acquired image; the coordinate conversion parameters of the virtual image coordinate under the reference system of the acquired image and the reference system of the projection picture are determined by acquiring the position information of the reference point of the projection picture in the image, and the reference point is at least three points which are not on the same straight line.
8. The method for outputting a projection screen based on a physical module as claimed in claim 7, wherein,
the relative position relation between the position of the virtual image in the projection picture and the physical module is coverage display; and acquiring a reference rectangular area corresponding to the object module in the acquired image, calculating coordinates of four vertexes of the reference rectangular area in the acquired image, determining the coordinates of the geometric center of the reference rectangular area in the acquired image, converting the coordinates into coordinates in a projection picture, overlapping the geometric center of the virtual image with the geometric center of the reference rectangular area, and displaying the virtual image correspondingly.
9. The method for outputting a projection screen based on a physical module as claimed in claim 7, wherein,
the relative position relation between the position of the virtual image in the projection picture and the physical module is displayed at intervals; and acquiring a reference rectangular area corresponding to the object module in the acquired image, calculating the distance from the reference rectangular area to each side of the projection picture, and displaying the virtual image on one side of the maximum distance according to a preset interval distance.
10. The method for outputting a projection screen based on a physical module as claimed in claim 7, wherein,
The relative position relation between the position of the virtual image in the projection picture and the physical module is adjacent display; the virtual image is provided with a second characteristic line, and the real object module is provided with a third characteristic line; identifying a third characteristic line of the object module, and acquiring coordinates of the third characteristic line in the projection picture; and the second characteristic line of the virtual image is parallel or overlapped with the third characteristic line of the real object module, the midpoint of the second characteristic line is positioned on the midvertical line of the third characteristic line, and the virtual image is displayed on the other side of the third characteristic line relative to the real object module.
11. The projection picture output method based on the object module according to claim 7, wherein the relative positional relationship between the position of the virtual image in the projection picture and the object module is surrounding display; acquiring a reference rectangular area corresponding to the object module in the acquired image, and calculating the circumcircle center coordinates and the circumcircle radius of the reference rectangular area; the virtual image is a sector-shaped circular ring, the inner arc radius of the sector-shaped circular ring is 1.1-3 times of the radius of the circumscribing circle, and the outer arc radius is 1.1-4 times of the inner arc radius; and taking the circle center of the circumscribing circle as the circle center, and correspondingly projecting a virtual image according to the radius of the inner arc and the radius of the outer arc.
12. The projection screen output method based on the object module according to claim 11, wherein the object module can rotate around the circumscribing circle center; a plurality of areas are divided in the fan-shaped circular ring virtual image; and the system determines a selection area of the current user on the virtual image according to the included angle between the user and the initial position after rotating the object module, and makes a corresponding response.
13. The projection screen output method based on the object module according to any one of claims 1 to 7, wherein the position relationship between the object module and the projection screen in the acquired image is determined, and the object module is determined to be located in the projection screen or located outside the projection screen; determining an overlapping area of a projection picture and a real object module in the acquired image, considering the real object module to be in the projection picture when the area of the overlapping area exceeds a set threshold corresponding to the area of the real object module, and taking the overlapping area as an area corresponding to the real object module in subsequent processing; the real object module further comprises a second sub-module which can be identified, and the identification information of the second sub-module can determine the relative position relation between the virtual image and the real object module; the second sub-module is an identifiable pattern or an identifiable structure or an induction chip of the physical module; the relative positional relationship is one of overlay display, gap display, adjacent display, or surround display.
14. The projection screen output method based on a real object module according to any one of claims 1 to 12, wherein the orientation of the virtual image is determined according to the acquired image, and a corresponding virtual image is projected according to the orientation; the orientation of the virtual image is determined according to a mark pattern corresponding to the object module in the detected acquisition image, or according to a dividing area where the object module in the acquisition image is placed, or according to the relative position relationship between a user corresponding to the object module in the acquisition image and the projection picture side.
15. The projection picture output method based on the real object module according to any one of claims 1 to 12, wherein the width of the virtual image in the projection picture is determined by the first characteristic line of the real object module, the length is obtained by calculating the relative speed or the relative acceleration of the real object module, and the relative speed or the acceleration is determined by obtaining the position of the real object module in the acquired images at different times.
16. The method for outputting a projection screen based on a physical module as claimed in any one of claims 1 to 12, wherein,
the system firstly determines the coordinate mapping relation between coordinates in the collected image and coordinates in the projection picture, then converts the interaction position information in the collected image into position information in the projection picture, determines the interaction action mode of the user according to the collected image, determines corresponding multimedia content according to the interaction action mode of the user and the interaction position information in the projection picture, and correspondingly outputs the multimedia content;
The game interaction comprises jigsaw, handwriting, painting copy, racing, music and programming interaction.
17. The method for outputting a projection screen based on a physical module as claimed in claim 5, wherein,
firstly detecting a reference rectangular area corresponding to a real object module or a projection picture, and then detecting key points in a relevant local range corresponding to the rectangular area, so that false identification is avoided, and the accuracy of the position of the key points is improved;
the method also comprises the following steps: a1, judging whether the projection picture is rectangular, and if not, carrying out trapezoidal correction of the projection equipment; if yes, not processing; a2, judging whether a projection picture in the acquired image is rectangular, if not, carrying out geometric correction on the image, wherein the bottom edge of the projection picture in the corrected image is parallel to the bottom edge of the acquired image, and retaining correction parameters for subsequent coordinate conversion; if yes, no processing is performed.
18. A projection screen output system based on a real object module, for implementing the projection screen output method according to any one of claims 1-17;
the system comprises an image acquisition unit, a projection unit and an information processing unit;
the image acquisition unit acquires an acquisition image comprising a projection picture and a real object module and transmits the acquisition image to the information processing unit;
The information processing unit is used for positioning and extracting a reference line of the projection picture in the acquired image, acquiring the length of the reference line in the acquired image and acquiring the pixel length of the reference line in the projection picture;
the information processing unit is used for positioning and extracting a first characteristic line of the object module in the acquired image and acquiring the length of the first characteristic line in the acquired image;
the information processing unit determines the size of the virtual image in the projection picture according to a preset proportion according to the length of the first characteristic line in the acquired image, the length of the reference line in the acquired image and the pixel length of the reference line in the projection picture;
the information processing unit controls the projection unit to keep the size and the position of the projection picture unchanged, and displays the virtual image in the projection picture.
19. A projection screen output device based on a physical module, comprising the projection screen output system based on a physical module according to claim 18.
CN202211576391.2A 2022-12-09 2022-12-09 Projection picture output method, system and equipment based on object module Active CN115580716B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211576391.2A CN115580716B (en) 2022-12-09 2022-12-09 Projection picture output method, system and equipment based on object module
CN202310652554.9A CN116668653A (en) 2022-12-09 2022-12-09 Interactive projection method, system and device for moving objects

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310652554.9A Division CN116668653A (en) 2022-12-09 2022-12-09 Interactive projection method, system and device for moving objects

Publications (2)

Publication Number Publication Date
CN115580716A CN115580716A (en) 2023-01-06
CN115580716B true CN115580716B (en) 2023-09-05

Family

ID=84590736

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310652554.9A Pending CN116668653A (en) 2022-12-09 2022-12-09 Interactive projection method, system and device for moving objects
CN202211576391.2A Active CN115580716B (en) 2022-12-09 2022-12-09 Projection picture output method, system and equipment based on object module


Country Status (1)

Country Link
CN (2) CN116668653A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851234A (en) * 2015-10-09 2017-06-13 精工爱普生株式会社 The control method of projecting apparatus and projecting apparatus
CN107507247A (en) * 2017-08-28 2017-12-22 哈尔滨拓博科技有限公司 A kind of real-time dynamic autoization scaling method of projected keyboard
CN109242958A (en) * 2018-08-29 2019-01-18 广景视睿科技(深圳)有限公司 A kind of method and device thereof of three-dimensional modeling
CN109903391A (en) * 2017-12-10 2019-06-18 彼乐智慧科技(北京)有限公司 A kind of method and system for realizing scene interactivity
KR20200117685A (en) * 2019-04-05 2020-10-14 한국전자통신연구원 Method for recognizing virtual objects, method for providing augmented reality content using the virtual objects and augmented brodadcasting system using the same
CN111796715A (en) * 2020-06-24 2020-10-20 歌尔光学科技有限公司 Detection method and detection device of touch control light film

Also Published As

Publication number Publication date
CN115580716A (en) 2023-01-06
CN116668653A (en) 2023-08-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant