CN112214117A - Image processing method and chip of air mouse, air mouse and system - Google Patents
Info
- Publication number
- CN112214117A (application CN201910618882.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- positioning
- air mouse
- pixels
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Position Input By Displaying (AREA)
Abstract
The invention discloses an image processing method for an air mouse, an air mouse applying the method, and an air mouse system. The air mouse system comprises an air mouse and at least two positioning marks with different characteristics, the air mouse comprising a housing, an image acquisition unit and a processing unit. The image acquisition unit photographs a scene containing the positioning marks and outputs a positioning image; the processing unit processes the positioning image to obtain the feature code and image displacement data of each positioning mark image, the feature code being derived from the features of the positioning mark image. With this image processing method, the movement length of the air mouse can be increased by adding positioning marks, so the air mouse system performs well in both movement precision and movement length.
Description
Technical Field
The invention relates to the field of computer and electronic game control, and in particular to an image processing method for an air mouse; a chip adopting the method; the air mouse; an air mouse system; an air mouse control system; an air mouse for smart mobile devices; a driver program for the air mouse; a computer system using the driver; an infrared sensor strip providing positioning marks for the air mouse; and a cloud gaming system.
Background
The mouse is the most common control device for computers and electronic games. It is mainly used to control the movement of a cursor on the screen of a display device such as a television or monitor. "Cursor" here refers broadly to any icon that moves with the mouse, such as a mouse pointer, a weapon sight in a video game, or a game character.
Movement precision and movement length are two important measures of mouse performance. Movement precision is the minimum number of pixels by which a single unidirectional movement of the mouse can move the cursor on screen; movement length is the maximum number of pixels by which a single unidirectional movement can move it.
An ordinary optical mouse performs well on both measures: a slight movement on the desktop moves the cursor by only 1 pixel, while a full sweep in one direction can move it from one edge of the screen to the opposite edge. Even when controlling a screen of 1920 × 1080 or higher resolution, an optical mouse achieves both movement precision and movement length.
An air mouse is another device for controlling an on-screen cursor. Unlike an optical mouse, an air mouse controls the movement of a cursor on a screen by moving itself in the air.
Chinese patent application No. 201210319834.X discloses an air mouse based on infrared image recognition. An infrared light emitter emits infrared light; an infrared image sensor in the mouse body senses that light and forms a series of images, each containing a light spot produced by the emitter. When the user holds and moves the body, the position of the light spot changes across the image sequence. The body derives position-change data from the spot's movement between adjacent images and sends it to a controlled device, which moves the driven cursor on screen accordingly.
Compared with an optical mouse, an air mouse using this infrared image recognition scheme cannot meet the movement-length requirement of mainstream display devices while keeping high movement precision.
Because of limited image processing performance, the infrared image sensor of such an air mouse outputs images at a low resolution, for example 320 × 240. With a movement precision of 1 (a single movement can move the on-screen cursor by as little as 1 pixel), a single unidirectional movement of the body can move the cursor horizontally by at most 320 pixels and vertically by at most 240 pixels. Mainstream display devices such as liquid crystal monitors and televisions, by contrast, have resolutions of 1920 × 1080 or higher. Used with such a display, a single horizontal movement of this prior-art air mouse moves the cursor across at most one sixth of the screen, and a single vertical movement across roughly a quarter, which falls far short of actual needs. To compensate, the body can scale up the position-change data it outputs: for example, when the measured light spot moves by 1 pixel between two adjacent frames, the cursor is moved by 6 pixels on screen. The horizontal movement length then reaches about 1920 pixels, which is sufficient, but the movement precision drops from 1 to 6: each single movement of the body now moves the cursor by at least 6 pixels, which makes for a very poor experience. An existing air mouse using the infrared image recognition scheme therefore cannot satisfy both the movement-precision and movement-length requirements of a mainstream-resolution display, which greatly limits its application.
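The trade-off above can be illustrated with a short sketch (our illustration, not part of the patent; the 320-pixel sensor width, 1920-pixel screen width and 6× scale factor are the figures from the text):

```python
SENSOR_W, SCREEN_W = 320, 1920   # sensor image width vs. mainstream screen width

def cursor_metrics(scale: int):
    """With each 1-pixel spot move mapped to `scale` cursor pixels:
    precision = smallest controllable cursor step,
    length = farthest a single sweep can move the cursor."""
    precision = scale
    length = SENSOR_W * scale
    return precision, length

print(cursor_metrics(1))   # (1, 320): precise, but covers only 1/6 of the screen
print(cursor_metrics(6))   # (6, 1920): spans the screen, but 6x coarser
```

Whatever scale factor is chosen, precision and length degrade together, which is exactly the dilemma the text describes.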
The infrared light emitter in effect provides a positioning mark for the body of the air mouse, allowing the body to determine its position in the air at any time. With only one emitter, however, there is only one positioning mark, so the air mouse cannot maintain a high level of both movement precision and movement length: increasing the movement length necessarily reduces the movement precision.
To increase the movement length while maintaining the movement precision, the number of positioning marks must be increased. Prior-art air mice cannot gain movement length this way, because of the image processing method they use. That method outputs position-change data based on how the images formed by the positioning marks (e.g. infrared light sources) move between two adjacent frames. With only 1 positioning mark, the two images being compared are formed by the same mark, so the position-change data correctly reflects the direction and distance of the air mouse's movement. With 2 or more positioning marks, the two images being compared may be formed by different marks; the position-change data then no longer reflects the air mouse's actual movement, and a cursor driven by it may move in a direction and by a distance that differ from the air mouse's actual motion.
Nintendo has released an infrared sensor strip consisting of a strip-shaped support with 4 infrared light sources mounted on it. The problem with using it to provide positioning marks for an air mouse is that the 4 light sources are identical, so the images they form in the output of the air mouse's infrared image sensor are also identical. Although the strip provides 4 positioning marks, it therefore cannot increase the air mouse's movement length.
Smart mobile devices such as smartphones and tablet computers are operated mainly by touch. This has limited the development of mobile games, because many game genres, such as FPS and MOBA titles, are best suited to mouse operation. Playing such games on a smartphone or tablet gives a poor experience: the touch-screen interaction differs too greatly from the mouse interaction players are used to, and the cursor cannot be controlled quickly and accurately.
The present invention has been made in such a context.
Disclosure of Invention
The invention aims to provide an image processing method for an air mouse, such that an air mouse adopting the method can increase its movement length by adding positioning marks.
The invention also aims to provide an image processing method for an air mouse, such that an air mouse adopting the method can increase its movement length without reducing its movement precision.
Another object of the present invention is to provide an air mouse whose movement length can be increased by adding positioning marks.
Another object of the present invention is to provide an air mouse system with both high movement precision and a long movement length.
Another object of the present invention is to provide an air mouse system that satisfies the requirements of mainstream display devices (screen resolution around 1920 × 1080) in both movement precision and movement length.
Another object of the present invention is to provide an image processing chip for an air mouse, enabling the air mouse to increase its movement length by adding positioning marks.
Another object of the present invention is to provide an air mouse control system for controlling a cursor on a display screen, in which the cursor's movement length can be increased by adding positioning marks.
Another object of the present invention is to provide an air mouse control system for controlling a cursor on a display screen that can be used with a plurality of (two or more) positioning marks and achieves a long movement length and high movement precision at the same time.
Another object of the present invention is to provide an air mouse driver, installed on the controlled device connected to the air mouse, capable of moving the cursor on screen according to the cursor control signals provided by the air mouse.
It is another object of the present invention to provide a computer system capable of moving the cursor on a controlled screen according to the cursor control signals provided by an air mouse.
Another object of the present invention is to provide an air mouse for smart mobile devices that can quickly and accurately control a cursor or a game on the screen of a smart mobile device.
Another object of the present invention is to provide an infrared sensor strip that provides positioning marks for an air mouse and can increase its movement length without reducing its movement precision.
Another object of the present invention is to provide an infrared sensor strip that provides positioning marks for an air mouse, allowing the air mouse to achieve both movement length and movement precision.
Another object of the present invention is to provide a cloud gaming system in which the air mouse's movement length can be increased by adding positioning marks.
To achieve the above objects, the invention provides an image processing method for an air mouse, characterized in that, when a frame of positioning image is processed, the feature code and image position data of a positioning mark image are obtained, the feature code being determined by the image features of the positioning mark image.
In one embodiment of the image processing method of the present invention, after the feature codes and image position data are acquired from a positioning image, the feature codes acquired from the previous frame of positioning image are searched for identical feature codes.
In one embodiment of the image processing method of the present invention, after the feature codes and image position data are acquired from a positioning image, the image position data corresponding to identical feature codes in the current frame and the previous frame are compared to obtain image displacement data.
In one embodiment of the image processing method of the present invention, when a frame of positioning image is processed, the feature code and image position data of one of the positioning mark images are obtained.
In one embodiment of the image processing method of the present invention, when a frame of positioning image is processed, the feature codes and image position data of all the positioning mark images are acquired.
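The matching and displacement steps above can be sketched as follows (a minimal illustration of ours, not the patent's literal implementation; each frame is reduced to a map from feature code to image position):

```python
# Each processed frame yields {feature_code: (row, col)}. Displacement is
# computed only between positions that share the same feature code, so marks
# are never confused with one another even when several are in view.

def image_displacement(prev: dict, curr: dict) -> dict:
    """Return {feature_code: (d_row, d_col)} for codes present in both frames."""
    moves = {}
    for code, (r, c) in curr.items():
        if code in prev:
            pr, pc = prev[code]
            moves[code] = (r - pr, c - pc)
    return moves

prev_frame = {50: (10, 20), 100: (10, 120)}   # marks A (code 50) and B (code 100)
curr_frame = {50: (10, 23), 100: (10, 123)}   # both marks shifted 3 columns right
print(image_displacement(prev_frame, curr_frame))  # {50: (0, 3), 100: (0, 3)}
```

A code present in only one of the two frames contributes nothing, which corresponds to the case where a mark enters or leaves the camera's field of view.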
The invention also provides an air mouse comprising a housing, an image acquisition unit and a processing unit. The image acquisition unit photographs a scene containing the positioning marks and outputs a positioning image containing positioning mark images; the processing unit processes the positioning image and sends a cursor control signal to the controlled device according to the image processing result. During this image processing, the feature code and image position data of each positioning mark image are obtained, the feature code being determined by the features of the positioning mark image.
In one embodiment of the air mouse of the present invention, the processing unit transmits the feature code and the image position data to the controlled device.
In one embodiment of the air mouse, after acquiring a feature code and image position data from a frame of positioning image, the processing unit checks whether the same feature code was acquired from the previous frame of positioning image. If so, it sends the image position data corresponding to that feature code to the controlled device; if not, it sends a mark-change signal to the controlled device.
In one embodiment of the air mouse, the processing unit obtains image displacement data from the displacement between positioning mark images with identical image features in two positioning images, and sends a displacement signal to the controlled device according to the image displacement data.
In one embodiment, the air mouse further comprises a key unit. The key unit is connected to the processing unit, and key signals generated by the key unit are sent to the controlled device through the processing unit.
In one embodiment, the air mouse comprises a housing, an image acquisition unit, an image processing unit, a key unit and a main control unit. The image acquisition unit photographs a scene containing the positioning marks and outputs a positioning image containing positioning mark images; the image processing unit processes the positioning image and sends the result to the main control unit; the main control unit sends a cursor control signal to the controlled device according to that result; key signals generated by the key unit are sent to the controlled device through the main control unit. When processing the positioning image, the image processing unit adopts the image processing method provided by the invention.
The invention also provides an air mouse system comprising the above air mouse and at least two positioning marks with different characteristics, where the positioning mark images formed in the positioning image by marks with different characteristics have different image features.
In one embodiment of the air mouse system of the present invention, positioning marks with different characteristics form positioning mark images whose constituent pixels have different pixel values.
In one embodiment of the air mouse system of the present invention, positioning marks with different characteristics form positioning mark images whose constituent pixels are arranged in different patterns.
In one embodiment of the air mouse system of the present invention, positioning marks with different characteristics form positioning mark images consisting of different numbers of pixels.
The invention also provides an air mouse control system comprising the above air mouse and a controlled device. The air mouse sends a cursor control signal to the controlled device, and the controlled device moves the cursor on the driven screen according to the cursor control signal.
In one embodiment of the air mouse control system of the present invention, the cursor control signal sent by the air mouse to the controlled device is image data (comprising feature codes and image position data). When the driver on the controlled device receives a group of image data, it moves the cursor on the driven screen according to the change in the image position data corresponding to identical feature codes between this group and the previous group.
In one embodiment of the air mouse control system of the present invention, the cursor control signal sent by the air mouse to the controlled device consists of image position data and mark-change signals. The driver on the controlled device processes them to obtain image displacement data and then moves the cursor on the driven screen.
In one embodiment of the air mouse control system of the present invention, the cursor control signal sent by the air mouse to the controlled device is a displacement signal, and the controlled device controls the cursor on the screen of the display device to move according to the displacement signal.
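The driver-side variant that receives groups of image data can be sketched as follows (our illustration under the assumption that each group is a list of (feature code, position) pairs; the `scale` parameter is a hypothetical name for the output ratio discussed in the background section):

```python
def cursor_delta(prev_group, curr_group, scale=1):
    """prev_group / curr_group: lists of (feature_code, (row, col)) pairs.
    Returns the (dx, dy) to apply to the cursor, based on the first feature
    code seen in both groups, or (0, 0) if no code matches."""
    prev = dict(prev_group)
    for code, (r, c) in curr_group:
        if code in prev:
            pr, pc = prev[code]
            return (c - pc) * scale, (r - pr) * scale  # x from columns, y from rows
    return 0, 0

prev_g = [(50, (10, 20)), (100, (10, 120))]
curr_g = [(100, (12, 125))]          # only mark B visible in the current group
print(cursor_delta(prev_g, curr_g))  # (5, 2)
```

Because the match is keyed on the feature code, the cursor still moves correctly even when a different subset of marks is visible from one group to the next.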
The invention also provides an image processing chip for processing the digital image. When the image processing chip processes the digital image, the image processing method is adopted.
The invention also provides a computer system comprising the controlled device of the air mouse and the driver program.
The invention also provides an infrared induction strip which comprises a bracket and at least two infrared light sources with different characteristics.
In one embodiment of the infrared sensor strip of the present invention, the adjacent infrared light sources differ in brightness.
In one embodiment of the infrared sensor strip of the present invention, the adjacent infrared light sources have different light emitting areas.
In one embodiment of the infrared sensor strip of the present invention, the shape of adjacent infrared light sources is different.
In one embodiment of the infrared sensor strip of the present invention, the bracket is hollow to conveniently accommodate the infrared light source and the battery.
The invention also provides an air mouse for smart mobile devices, comprising a housing, an image acquisition unit and a processing unit. The image acquisition unit photographs a scene containing the positioning marks and outputs a positioning image containing positioning mark images; the processing unit obtains a cursor control signal by processing the positioning image and sends it to the controlled smart mobile device (a smartphone or tablet computer). When processing the positioning image, the processing unit adopts the image processing method of the invention.
The controlled devices in the air mouse control system of the invention are various types of electronic computer devices, including but not limited to PCs, television game machines, smart televisions, set top boxes, tablet computers, smart phones and VR devices.
Compared with prior-art image processing methods for air mice, the image processing method of the invention additionally obtains a feature code from the image features, on top of obtaining the image position data of each positioning mark in each frame of positioning image. An air mouse applying this method can be used with two or more positioning marks with different characteristics, increasing the movement length without reducing the movement precision.
The image processing chip of the invention adopts the image processing method of the invention. The air mouse adopting the image processing chip can increase the moving length by increasing the positioning mark.
The air mouse system of the invention uses two or more positioning marks, increasing the movement length without reducing the movement precision. Used with a mainstream display device, it performs well in both cursor movement length and precision, making it genuinely practical. The large improvement in cursor control comes at very little added cost (only extra positioning marks in hardware), so performance stays high while cost stays low, which favors wide adoption.
The air mouse control system of the invention moves the cursor on the screen of the display device driven by the controlled device, greatly increasing the movement length without reducing the cursor's movement precision, and meets the requirements of mainstream display devices (screen resolutions of 1920 × 1080 and the like).
In the infrared sensor strip of the invention, adjacent infrared light sources have different characteristics, so the positioning mark images they form in the positioning image have different image features. Used with an air mouse, the strip lets the air mouse achieve a long movement length and high movement precision at the same time.
Drawings
FIG. 1 shows an embodiment of the air mouse of the present invention.
FIG. 2 shows an air mouse and a manner of use of the present invention.
FIG. 3 shows an embodiment of the driver for the air mouse of the present invention.
FIG. 4 shows the signal flow of the air mouse and the air mouse control system of the present invention.
Fig. 5 shows an embodiment of the infrared sensor strip of the present invention.
Fig. 6 shows an embodiment of the infrared sensor strip of the present invention.
Fig. 7 shows an embodiment of the image processing method of the present invention.
Fig. 8 shows an embodiment of the infrared sensor strip of the present invention.
Fig. 9 shows an embodiment of the infrared sensor strip of the present invention.
Fig. 10 illustrates an embodiment of the infrared sensor strip of the present invention.
Fig. 11 illustrates an embodiment of the infrared sensor strip of the present invention.
Fig. 12 shows an embodiment of the image processing method of the present invention.
Fig. 13 shows an embodiment of the infrared sensor strip of the present invention.
Fig. 14 illustrates an embodiment of the infrared sensor strip of the present invention.
Fig. 15 shows an embodiment of the image processing method of the present invention.
FIG. 16 shows an embodiment of the air mouse driver of the present invention.
Fig. 17 shows an embodiment of the image processing method of the present invention.
FIG. 18 shows an embodiment of the air mouse and air mouse control system of the present invention.
FIG. 19 shows an embodiment of the air mouse and air mouse control system of the present invention.
Detailed Description
The invention is further illustrated below by specific embodiments, with reference to the drawings.
Example one
The present embodiment provides an air mouse. As shown in fig. 1, the air mouse comprises a housing 200, a button 201 on the upper surface of the housing 200, a lens 202 at the middle of the front end of the housing 200, and a circuit portion inside the housing 200. The circuit portion comprises a key circuit connected to the button 201, an image sensor module connected to the lens 202, and a processing unit.
The lens 202 and the image sensor module together form the image acquisition unit. The lens 202 is a filter lens that blocks visible light and passes only infrared light. The image sensor module is an OV7620 module, which integrates a CMOS image sensor and a DSP chip; together with the lens 202 it forms a complete infrared digital camera module that can photograph infrared-emitting objects and output digital images. The button 201 and the key circuit together form the key unit; key signals generated by the key unit pass through the processing unit to the controlled device. The key signal implements the button control functions and belongs to the prior art.
The processing unit is an STM32F407 chip from STMicroelectronics. The output of the OV7620 image sensor module is electrically connected to input pins of the STM32F407 chip, as is the key circuit. The processing unit also bridges to the controlled device. In this embodiment the controlled device is a PC connected to the STM32F407 through a USB cable; it drives a display device, here a liquid crystal display with a resolution of 1920 × 1080. The processing unit can also connect to and communicate with the controlled device wirelessly.
To provide the cursor control function, the air mouse needs at least 1 infrared light source as a positioning mark, so that it can determine its own position at any time. The infrared light source may be any object that emits infrared light, including various infrared light emitters, candles, and the like.
In use, the OV7620 image sensor module captures images through the lens 202 and continuously outputs them to the processing unit (STM32F407) as digital signals. The output images contain images formed by the positioning marks (infrared light sources), called positioning mark images; an image containing positioning mark images is called a positioning image. The processing unit processes each positioning image and sends control signals to the controlled device according to the image processing result.
The OV7620 image sensor module is set to output grayscale images at 50 frames per second with a resolution of 320 × 240. A grayscale image is a kind of digital image; each frame records a 320 × 240 pixel matrix with 240 rows and 320 columns, numbered row 0 to row 239 from top to bottom and column 0 to column 319 from left to right. The position of each pixel in the matrix is given by a unique coordinate pair. The first pixel in the upper-left corner has coordinates (0, 0), meaning it lies in row 0, column 0: the first number is the row coordinate and the second is the column coordinate. Each pixel in the matrix has a pixel value, represented by a number from 0 to 255, where 255 is white, 0 is black, and the values in between are shades of gray.
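The coordinate convention just described can be made concrete with a short sketch (our illustration; a frame is modeled as a plain nested list, which is not how the patent stores it):

```python
# A 240-row x 320-column grayscale frame, addressed as (row, column)
# with (0, 0) at the top-left corner and pixel values from 0 to 255.
ROWS, COLS = 240, 320
frame = [[0] * COLS for _ in range(ROWS)]   # all-black frame (pixel value 0)

frame[0][0] = 255        # top-left pixel set to white
frame[239][319] = 128    # bottom-right pixel set to mid-gray

print(frame[0][0], frame[239][319])   # 255 128
```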
The air mouse together with two positioning marks forms an air mouse system with better performance. The two positioning marks have different characteristics, and those characteristics are captured in the positioning image.
In this embodiment, the image acquisition unit may also be built around any other infrared image sensor, as long as it can output digital images.
The embodiment also provides an infrared induction strip which is used for providing reference for the displacement of the air mouse in the embodiment in the air. As shown in fig. 2, the infrared sensor strip includes a bracket 300, an infrared light source a301 disposed at the left end of the front surface of the bracket 300, and an infrared light source B302 disposed at the right end of the front surface of the bracket 300. Preferably, the support 300 has a rectangular shape with a length of 25CM, a height of 2CM, a thickness of 2CM, and a distance of 20CM between the 2 infrared light sources. Preferably, the 2 infrared light sources are infrared light emitting diodes, and are powered by an external power supply. The holder 300 may also be hollow and have a battery compartment for attaching a battery to power the infrared light source. The 2 infrared light sources on the bracket 300 are used as 2 positioning marks and form an air mouse control system together with an air mouse. Wherein, the infrared light source A301 is a positioning mark A, and the infrared light source B302 is a positioning mark B. When in use, the infrared induction strip is placed in front of a user at a distance of about 50 CM.
The two infrared light sources emit infrared light of different brightness. For example, the luminous flux of infrared light source A301 is 5 lumens, and the luminous flux of infrared light source B302 is 10 lumens. The positioning mark images formed by the two infrared light sources in the positioning image therefore have different image characteristics, reflected in different pixel values. For example, in this embodiment, each pixel constituting the image of infrared light source A301 in the positioning image has a pixel value of 50, while each pixel constituting the image of infrared light source B302 has a pixel value of 100.
In the positioning image, only the constituent pixels of the positioning mark images have pixel values of 50 or more, so whether a positioning mark image has been found can be judged from this information. In this embodiment, when the processing unit searches the pixel matrix for a positioning mark image, finding a pixel with a pixel value above 40 is taken as finding a positioning mark image.
The processing unit of the air mouse carries out image processing on the positioning image output by the image acquisition unit, and aims to obtain the feature code and the image position data of the positioning mark image, and the specific method in the embodiment is as follows:
The pixel value of each pixel in the positioning image is read one by one; when the 1st pixel value above 40 is encountered, the pixel value and the coordinate of that pixel are recorded. The pixel value is the feature code, and the coordinate is the image position data corresponding to the feature code. For example, when the pixel with coordinate (0, 5) is read and its pixel value is found to be 50, both 50 and (0, 5) are recorded: 50 is the feature code, and (0, 5) is the corresponding image position data. The image position data is in fact the coordinate of 1 of the constituent pixels of 1 positioning mark image in the positioning image, and it is used to represent the position of that positioning mark image in the positioning image.
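The scan-and-record step above can be sketched as follows (an illustrative Python sketch; the threshold of 40 is the one stated above, while the function name is an assumption):

```python
# Read pixels one by one, left to right and top to bottom; record the first
# pixel value above the threshold as the feature code, with its coordinate
# as the image position data. Names are illustrative, not from the patent.
def first_feature(image, threshold=40):
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > threshold:
                return value, (r, c)   # (feature code, image position data)
    return None                        # no positioning mark image found

img = [[0] * 320 for _ in range(240)]
img[0][5] = 50                         # the example pixel at coordinate (0, 5)
print(first_feature(img))
```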
Each time a frame of the positioning image is processed, the processing unit sends a set of image data, including the feature code and the image position data, to the connected PC.
This embodiment also provides a driver program, for use on a controlled device of the air mouse (a PC in this embodiment), which controls the movement of the on-screen cursor according to the image data.
When the driving program receives a group of image data, the adopted processing method comprises the following steps:
The driver searches the previously received group of image data for the same feature code. If the same feature code is found, the image position data corresponding to that feature code in the two groups of image data are compared to obtain image displacement data, and the on-screen cursor is moved according to the image displacement data; if no match is found, the processing of the group of image data ends. The processing method is shown in fig. 3.
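The processing method above can be sketched in Python, with a dictionary standing in for the driver's recording area (an illustrative choice; this application does not specify a data structure):

```python
# Sketch of the driver-side processing. A dictionary stands in for the
# recording area; names are illustrative, not from the patent.
record = {}  # feature code -> image position data of the previous group

def handle_image_data(code, pos):
    """Return (row change, column change) if the feature code matches the
    previous group, else None; then update the recording area."""
    disp = None
    if code in record:
        prev = record[code]
        disp = (pos[0] - prev[0], pos[1] - prev[1])
    record.clear()
    record[code] = pos
    return disp

print(handle_image_data(50, (120, 318)))  # first group: nothing recorded yet
print(handle_image_data(50, (120, 317)))  # column coordinate decreased by 1
```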
The components and signal transmission modes of the air mouse and the air mouse control system in this embodiment are shown in fig. 4.
The air mouse and the image processing method according to the present embodiment will be described in detail below, taking the lateral movement of the air mouse as an example.
The driver program on the PC side is provided with a recording area for the image data (including the feature code and the image position data) transmitted from the air mouse; whenever the processing of a group of image data ends, the data in the recording area is updated.
The user holds the air mouse with the lens 202 facing positioning mark A (infrared light source A301). The image acquisition unit starts to capture and output the 1st frame positioning image, in which the four pixels with coordinates (120, 318), (120, 319), (121, 318), (121, 319) have pixel values of 50 and the remaining pixels have pixel values of 0. The image composed of these four pixels is formed by infrared light source A and is called the image of positioning mark A.
The frame positioning image is transmitted to a processing unit in the form of a digital signal, and the processing unit performs the following image processing:
The pixel value of each pixel in the pixel matrix is read one by one, from left to right and from top to bottom. Specifically, starting from the pixel with coordinate (0, 0), the pixel value of each pixel in the first row is read from left to right, followed by the second row, the third row, and so on through the last row.
When the pixel with coordinate (120, 318) is read in this order, the read pixel value is 50. Because this pixel value is greater than 40, 50 and (120, 318) are recorded, where 50 is the feature code and (120, 318) is the image position data corresponding to the feature code.
The processing unit sends 50 and (120, 318) to the connected PC, and the driver on the PC side searches the recording area for the feature code 50. Since this is the first set of image data processed, the recording area holds no record; after 50 and (120, 318) are stored in the recording area, the whole image processing process ends.
As the user holds the air mouse and continues moving it to the right, the four pixels with coordinates (120, 317), (120, 318), (121, 317), (121, 318) in the second frame positioning image output by the image acquisition unit have pixel values of 50, and the remaining pixels have pixel values of 0.
The image data acquired by the processing unit from this frame positioning image are 50 and (120, 317).
The driver finds the matching feature code 50 in the recording area. Comparing (120, 317) with (120, 318) shows that the column coordinate has decreased by 1, and the driver thus obtains the image displacement data. The PC then moves the cursor on its LCD screen 1 pixel to the right (with the screen facing the user).
As the user keeps moving the air mouse to the right, the image acquisition unit continuously captures and outputs positioning images, in which the image of positioning mark A gradually moves left. The processing unit continuously outputs image data from the image processing results, and the PC moves the cursor on the liquid crystal display continuously to the right according to that image data.
When the image acquisition unit outputs the nth frame positioning image, the processing unit obtains the feature code 50 and the corresponding image position data (120, 0) from it; at this point, positioning mark A has reached the edge of the visible range of the lens 202. After this frame positioning image is processed, the data in the recording area of the driver is 50, (120, 0).
After the user moves the air mouse a further distance to the right, only positioning mark B remains within the visible range of the lens 202. The image acquisition unit then outputs the (n+1)th frame positioning image, in which four pixels have pixel values of 100 and the remaining pixels have pixel values of 0. These four pixels constitute the image of positioning mark B, and their coordinates are (120, 317), (120, 318), (121, 317), (121, 318).
The processing unit obtains the feature code 100 and the corresponding image position data (120, 317) from this frame positioning image. The driver on the PC side does not find the feature code 100 in the recording area, so the data in the recording area is updated to 100, (120, 317), and the processing of this group of image data ends.
After the user moves the air mouse a little further to the right, 4 pixels in the next frame positioning image output by the image acquisition unit have pixel values of 100, and the remaining pixels have pixel values of 0. The coordinates of the 4 pixels are (120, 316), (120, 317), (121, 316), (121, 317). After the processing unit performs image processing on this frame, the image data sent to the PC is 100 and (120, 316). The driver finds the matching feature code 100 in the recording area, compares (120, 316) with (120, 317), and obtains image displacement data indicating that the column coordinate has decreased by 1; the PC moves the cursor on its liquid crystal display 1 pixel to the right. As the user keeps moving the air mouse to the right, the on-screen cursor also keeps moving to the right, until positioning mark B moves out of the visible range of the lens 202.
In the air mouse of this embodiment, the cursor can be controlled with 1 positioning mark within the visible range of the lens 202; adding a positioning mark increases the movement length, and the increase depends on the distance between the two positioning marks. A more reasonable placement distance is such that, when the air mouse is between the two positioning marks, both marks are within the visible range of the lens 202 but at its edges. During lateral movement, this ensures continuity of cursor control while achieving the longest movement length; in this case, adding 1 positioning mark increases the movement length by close to 320 pixels.
The beneficial effect of the air mouse of this embodiment comes from the image processing method it adopts. While moving from left to right, the air mouse uses positioning mark A and then positioning mark B in turn as the reference object for determining its own position. When the reference object switches from positioning mark A to positioning mark B, the on-screen cursor pauses very briefly and then continues moving to the right. A prior-art air mouse, which simply compares the positions of the positioning mark images in consecutive positioning images, would move the on-screen cursor opposite to the air mouse when the reference object switches. For example, the last frame positioning image generated with positioning mark A as the reference object has image position data (120, 0); after a slight move to the right, the first frame generated with positioning mark B as the reference object has image position data (120, 317). Comparing these two position data would output a displacement signal that moves the on-screen cursor 317 pixels to the left.
When performing image processing on each frame positioning image, the air mouse of this embodiment obtains not only the image position data but also the feature code derived from the image characteristics. With this necessary information, the driver on the PC side compares only the positions of the same positioning mark in two frames of positioning images, so the movement length can be increased by adding positioning marks.
When the driver compares image position data, the row coordinates and the column coordinates are compared separately to find the change in their values; there are four possible changes: row coordinate increase, row coordinate decrease, column coordinate increase, and column coordinate decrease. The driver of this embodiment controls the movement of the on-screen cursor according to the image displacement data obtained from this comparison, using the following rule: if the row coordinate increases by n, the on-screen cursor is moved up n pixels; if the row coordinate decreases by n, the cursor is moved down n pixels; if the column coordinate increases by n, the cursor is moved left n pixels; if the column coordinate decreases by n, the cursor is moved right n pixels; if both the row coordinate and the column coordinate change, the cursor is moved correspondingly in both directions. For example, if the row coordinate and the column coordinate each increase by 1, the on-screen cursor is moved up 1 pixel and then left 1 pixel.
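The four-way rule above can be sketched as the following Python mapping (the function name is illustrative; the rule itself is the one just stated):

```python
# Map image displacement data to on-screen cursor movement:
# row + -> up, row - -> down, column + -> left, column - -> right.
def cursor_move(d_row, d_col):
    moves = []
    if d_row > 0:
        moves.append(("up", d_row))
    elif d_row < 0:
        moves.append(("down", -d_row))
    if d_col > 0:
        moves.append(("left", d_col))
    elif d_col < 0:
        moves.append(("right", -d_col))
    return moves

print(cursor_move(0, -1))  # column decreased by 1 -> move right 1 pixel
print(cursor_move(1, 1))   # both increased by 1 -> up 1 pixel, then left 1 pixel
```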
This rule ensures that the on-screen cursor moves in the same direction as the air mouse. The rule may also be reversed: if the column coordinate in the obtained image displacement data increases, the cursor is moved right; if the column coordinate decreases, the cursor is moved left. In that case, during lateral movement, the on-screen cursor moves opposite to the air mouse. The direction of longitudinal movement can be modified in the same way.
In one implementation of the driver of this embodiment, the driver multiplies the image displacement data by 2 before output; for example, if the column coordinate decreases by 1, the on-screen cursor moves 2 pixels to the right. In this way, a shorter movement of the air mouse controls a longer movement of the on-screen cursor.
If the image displacement data is multiplied by 3 before output, the air mouse system consisting of the air mouse and the two positioning marks can essentially meet the requirement of controlling a screen of mainstream resolution: the lateral movement length of the air mouse approaches 1920 pixels, with a movement precision of 3. If the longitudinal movement length needs to be increased instead, the infrared sensor strip is placed vertically so that the two infrared light sources are arranged one above the other.
Besides translation, rotation of the air mouse in the air can also control the movement of the on-screen cursor: rotating left or right controls lateral cursor movement, and rotating up or down controls longitudinal cursor movement. This is because rotation likewise changes the position of the positioning mark image in the positioning image output by the image acquisition unit.
In the present invention, the distance between the air mouse and the infrared light sources is preferably 50 CM to 100 CM; if the distance is too short, the positioning mark image in the positioning image becomes too large to locate.
In one embodiment of the infrared sensor strip, the rectangular bracket is 45 CM long, and 3 infrared light sources are disposed from left to right on its front surface, arranged in a row with 20 CM between adjacent light sources. The infrared light sources on the left and right have the same brightness, while the middle light source has a different brightness; for example, the luminous flux of the left and right infrared light sources is 5 lumens, and that of the middle one is 10 lumens. The 3 infrared light sources serve as 3 positioning marks and form an air mouse system with the air mouse, which further increases the movement length of the air mouse.
In another embodiment of the infrared sensor strip, the bracket 300 is rectangular, 25 CM long and 8 CM high, with 4 infrared light sources of different brightness at its 4 corners, as shown in fig. 5. Preferably, the luminous fluxes of the 4 infrared light sources are 5 lumens, 10 lumens, 15 lumens, and 20 lumens respectively. This provides 4 positioning marks for the air mouse, and the positioning mark images formed by any two adjacent positioning marks in the positioning image have different characteristics (pixel values). Compared with a prior-art air mouse with only 1 positioning mark, the air mouse system formed by these 4 infrared light sources and the air mouse increases the movement length when the air mouse moves laterally, longitudinally, or obliquely in the air. The bracket 300 in this embodiment may also be a frame-shaped structure with 1 infrared light source at each of the 4 corners, the 4 light sources having 4 different brightness levels, as shown in fig. 6. Preferably, each side of the frame is 2 CM wide.
In an implementation manner of the air mouse in this embodiment, the processing unit applies the following image processing method to the positioning image:
S1, the pixel values of all pixels in the positioning image are read, and pixels with pixel values greater than 40 are taken as candidate pixels.
S2, the pixel with the smallest coordinate value is selected from all candidate pixels; its pixel value is recorded as a feature code, and its coordinate is recorded as image position data. The pixel with the smallest coordinate value is selected as follows: among all candidate pixels, select those with the smallest row coordinate; if there is only 1 such pixel, it is the pixel with the smallest coordinate value; if there are several, the one among them with the smallest column coordinate is selected.
S3, the selected pixel and all candidate pixels with the same pixel value are removed from the candidate pixels. If candidate pixels remain, processing continues from S2; if not, the processing of this frame positioning image ends.
After 1 frame positioning image is processed according to this image processing method, the processing unit has acquired the image data of all positioning mark images. The image processing method is shown in fig. 7. After the image processing of each frame positioning image is finished, the processing unit sends all the acquired feature codes and image positions to the connected PC as one group of image data.
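Steps S1 to S3 above can be sketched in Python as follows (an illustrative implementation; the function name and threshold parameter are assumptions, the threshold value of 40 is the one stated earlier):

```python
# Illustrative Python sketch of steps S1-S3.
def extract_image_data(image, threshold=40):
    # S1: collect candidate pixels with pixel values above the threshold.
    candidates = [(r, c) for r, row in enumerate(image)
                  for c, v in enumerate(row) if v > threshold]
    data = []
    while candidates:
        # S2: tuple comparison picks the smallest row, then smallest column.
        r, c = min(candidates)
        code = image[r][c]
        data.append((code, (r, c)))
        # S3: drop every candidate that shares this pixel value.
        candidates = [(rr, cc) for rr, cc in candidates if image[rr][cc] != code]
    return data

img = [[0] * 320 for _ in range(240)]
for r, c in [(120, 0), (120, 1), (121, 0), (121, 1)]:
    img[r][c] = 50      # image of positioning mark A
for r, c in [(120, 318), (120, 319), (121, 318), (121, 319)]:
    img[r][c] = 100     # image of positioning mark B
print(extract_image_data(img))
```

Run on the two-mark example frame above, this returns one (feature code, image position data) pair per positioning mark image.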
Each group of image data received by the driver on the PC side may therefore contain 1 or more feature codes with corresponding image position data. When there is more than 1 feature code, one is extracted arbitrarily and the recording area is searched for the same feature code. If it is found, the corresponding image position data is compared with the image position data stored for that feature code in the recording area, and the on-screen cursor is moved according to the comparison result; if it is not found, another feature code is extracted and the search continues until a match is found or all feature codes have been tried.
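The multi-feature-code lookup just described can be sketched as follows (again an illustrative Python sketch, with a dictionary standing in for the recording area and an assumed previous group as its initial state):

```python
# Sketch of the driver handling a group with several feature codes.
record = {50: (120, 0), 100: (120, 318)}  # assumed previous group (illustrative)

def process_group(group):
    """Return the displacement for the first feature code that matches the
    recording area (None if no code matches), then replace the record."""
    disp = None
    for code, pos in group:
        if code in record:
            prev = record[code]
            disp = (pos[0] - prev[0], pos[1] - prev[1])
            break
    record.clear()
    record.update(dict(group))
    return disp

print(process_group([(50, (120, 1)), (100, (120, 319))]))  # column +1 -> cursor left
```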
The image processing method and the driver are exemplified below.
When the image acquisition unit captures infrared light sources A and B at the same time, it outputs the 1st frame positioning image. In it, 4 pixels have a pixel value of 50, with coordinates (120, 0), (120, 1), (121, 0), (121, 1); and 4 pixels have a pixel value of 100, with coordinates (120, 318), (120, 319), (121, 318), (121, 319).
The processing unit performs the following image processing on the frame positioning image:
S1, the pixel values of all pixels in the positioning image are read, and 8 candidate pixels are found in total, with coordinates (120, 0), (120, 1), (121, 0), (121, 1), (120, 318), (120, 319), (121, 318), (121, 319).
And S2, selecting a pixel with the minimum coordinate value from the candidate pixels, wherein the coordinate of the pixel is (120, 0). The feature code 50 and the image position data (120, 0) are entered into the image data.
And S3, after the pixel and all pixels with the pixel values of 50 are removed from the candidate pixels, 4 candidate pixels are remained, wherein the pixel values of the 4 candidate pixels are 100, and the coordinates are (120, 318), (120, 319), (121, 318), (121, 319), respectively.
Returning to S2, the pixel with the smallest coordinate value is selected from the remaining 4 candidate pixels; its coordinate is (120, 318). The feature code 100 and the image position data (120, 318) are entered into the image data.
After this pixel and all pixels with pixel value 100 are removed from the candidate pixels, no candidate pixels remain, and the image processing of this frame positioning image ends. The processing unit sends the 1st group of image data to the connected PC, containing the feature code 50 with corresponding image position data (120, 0), and the feature code 100 with corresponding image position data (120, 318).
After receiving this group of image data, the PC driver first searches the recording area for the feature code 50 and finds no match; it then searches the recording area for the feature code 100 and again finds no match. The processing ends, and the group of image data is recorded in the recording area.
The image acquisition unit outputs the 2nd frame positioning image, and after image processing the processing unit sends the 2nd group of image data to the PC: the feature code 50 with corresponding image position data (120, 1), and the feature code 100 with corresponding image position data (120, 319). The driver finds a match in the recording area for the feature code 50, compares (120, 1) with (120, 0), and finds that the column coordinate has increased by 1. The on-screen cursor is moved 1 pixel to the left, the processing ends, and the data in the recording area is replaced with the 2nd group of image data.
In one implementation of the air mouse of this embodiment, the image acquisition unit captures visible light images; for example, the lens fitted to the image sensor module is replaced with one that passes visible light. A visible light source can then serve as a positioning mark; for example, a light emitting diode may be used as a positioning mark.
Two light emitting diodes can be used as positioning marks, one with a luminous flux of 5 lumens and the other with a luminous flux of 10 lumens.
3 light emitting diodes can be used as positioning marks, one with a luminous flux of 5 lumens and the other two with a luminous flux of 10 lumens. In use, the diode with the luminous flux of 5 lumens is placed in the middle.
4 light emitting diodes of different brightness can be used as positioning marks; for example, the 4 diodes are placed at the 4 corners of 1 display in the arrangement of fig. 6, and in use the air mouse is aimed at the display.
The image processing unit of the air mouse of the present invention can process higher-resolution images; for example, the OV7620 image sensor can output grayscale digital images with a resolution of 640 × 480 at 30 frames per second. Processing positioning images of this resolution gives the air mouse a greater movement length with the movement precision unchanged.
In one embodiment, the air mouse further includes a cursor movement control key disposed on the surface of the housing and electrically connected to the processing unit. It controls whether the on-screen cursor follows the air mouse; for example, the user often needs to move the air mouse back to the middle of the effective area without the on-screen cursor following. When the cursor movement control key is pressed, the processing unit stops outputting the cursor control signal; when the key is released, output resumes. Preferably, the key is located on the housing in a position suitable for pressing with a thumb.
In this specification, the cursor controlled by the air mouse broadly refers to various icons that move with the air mouse, such as a mouse pointer on a screen, a weapon sight in an electronic game, or a game character. The cursor may have different sizes and may even comprise the entire screen; for example, when the air mouse controls the view of a character in a game, the entire screen is the field of view, and the display of the entire screen changes as the air mouse moves.
The air mouse and the controlled device of this embodiment together form an air mouse control system. The controlled device is any of various computer devices, including but not limited to a PC, a television game console, a smart television, a set-top box, a tablet computer, a smartphone, and a VR device. The air mouse outputs image data (including feature codes and image position data), and the controlled device moves the cursor on the screen of its display device according to the image data.
When the air mouse is used for controlling the smart phone or the tablet computer, the processing unit outputs image data, and the smart phone or the tablet computer controls cursor movement on a screen according to the image data.
Example two
The present embodiment provides an air mouse, which is different from the air mouse in the first embodiment in the method for acquiring image data. The processing unit of the air mouse of the embodiment adopts the following processing method for the positioning image:
The pixel value of each pixel in the preselected area is read one by one until a pixel with a pixel value above 40 is found. Then, centered on that pixel, the pixel values of all pixels in a 7 pixel × 7 pixel range are read, and all pixels whose pixel values are 40 or more are found.
For example, if the coordinate of the found 1st pixel with a pixel value above 40 is (X, Y), the pixel values of all pixels whose coordinates meet the following requirements are read: row coordinates X-3 to X+3 and column coordinates Y-3 to Y+3. The purpose of this step is to find all the pixels constituting the positioning mark image.
The feature code is then determined based on the number of all pixels found. For example: if there are 4 pixels with pixel values above 40, recording the feature code as 4; if there are 9 pixels having a pixel value of 40 or more, the feature code is recorded as 9.
Next, the pixel with the smallest coordinate value is found among all the pixels with pixel values above 40, and its coordinate is recorded as the image position data. The pixel with the smallest coordinate value is determined as follows: among all the pixels with pixel values above 40, select those with the smallest row coordinate; if there is only 1, it is the pixel with the smallest coordinate value; if there are several, the one among them with the smallest column coordinate is taken as the pixel with the smallest coordinate value.
The aforementioned preselected area is a region divided out of the positioning image as follows: in the 320 × 240 pixel matrix, the first 3 rows, the last 3 rows, the first 3 columns, and the last 3 columns of pixels are removed, and the remaining pixels constitute the preselected area. That is, all pixels with row coordinates 3 to 236 and column coordinates 3 to 316 constitute the preselected area.
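The steps above, together with the preselected-area bounds just defined, can be sketched as follows (illustrative Python; the margin of 3 rows/columns doubles as the half-width of the 7 × 7 window, and the function name is an assumption):

```python
# Second-embodiment sketch: scan the preselected area for the first pixel
# above the threshold, count the bright pixels in the 7 x 7 window around it
# (the feature code), and take the smallest coordinate as the position data.
def neighborhood_feature(image, threshold=40, half=3):
    rows, cols = len(image), len(image[0])
    for r in range(half, rows - half):          # rows 3..236 for 240 rows
        for c in range(half, cols - half):      # columns 3..316 for 320 columns
            if image[r][c] > threshold:
                bright = [(rr, cc)
                          for rr in range(r - half, r + half + 1)
                          for cc in range(c - half, c + half + 1)
                          if image[rr][cc] > threshold]
                return len(bright), min(bright)  # (feature code, position data)
    return None

img = [[0] * 320 for _ in range(240)]
for r, c in [(120, 3), (120, 4), (121, 3), (121, 4)]:
    img[r][c] = 100                              # a 4-pixel positioning mark image
print(neighborhood_feature(img))
```

Because the scan starts no closer than `half` pixels from any edge, the 7 × 7 window read around the first hit always lies within the full pixel matrix, which is the point of the preselected area.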
The preselected areas referred to in the embodiments of this specification all have the same selected range. For the positioning marks with different characteristics in this embodiment, the number of constituent pixels of the positioning mark image formed in the positioning image differs, and this number represents the image characteristic of the positioning mark image. If a light source image is incomplete, the determination of the image characteristic may be affected, so the feature code obtained from the image characteristic may be erroneous. However, all constituent pixels of a positioning mark image found within the preselected area are certain to lie within the full 320 × 240 pixel matrix. This is the purpose of setting the preselected area.
If the positioning mark image in the positioning image is larger, it consists of more pixels, and the preselected area can be further reduced; for example, the first 5 rows, the last 5 rows, the first 5 columns, and the last 5 columns of pixels in the pixel matrix are removed, and the remaining pixels are taken as the preselected area.
This embodiment also provides an infrared sensor strip, which includes a bracket 300 and 2 infrared light sources. As shown in fig. 8, the bracket 300 is rectangular, 25 CM long, 2 CM high, and 2 CM thick. The left and right ends of its front surface are each provided with 1 square hole, with side lengths of 2 millimeters and 3 millimeters respectively. Behind each hole is 1 infrared light emitting diode, and the two diodes have the same brightness. The two holes emitting infrared light are the infrared light sources: the hole with a side length of 2 mm has a light-emitting area of 4 square millimeters, and the hole with a side length of 3 mm has a light-emitting area of 9 square millimeters. The interior of the bracket 300 may be hollow to accommodate the infrared light emitting diodes.
The 2 infrared light sources can serve as 2 positioning marks and form an air mouse system together with the air mouse of this embodiment. The images they form in the positioning image consist of different numbers of pixels. For example, when the distance between the air mouse and the infrared light sources is about 1 M, the image formed by the light source with a light-emitting area of 4 square millimeters consists of 4 pixels, while the image formed by the light source with a light-emitting area of 9 square millimeters consists of 9 pixels. The processing unit determines the feature code from the number of pixels of the positioning mark image.
The method for acquiring the feature code and the image position data by the air mouse according to the embodiment will be described in detail below.
In the 1-frame positioning image output by the image acquisition unit, the pixel value of 4 pixels is 100, and the coordinates are (120, 3), (120, 4), (121, 3) and (121, 4), respectively.
The processing unit reads the pixel value of each pixel in the preselected area one by one, from left to right and from top to bottom; on reading the pixel with coordinate (120, 3), it encounters the 1st pixel value above 40.
With this pixel as the center, the pixel values of 49 pixels within the range of 7 pixels × 7 pixels around are read. I.e., reading pixel values of all pixels in the range of row coordinates 117 to 123 and column coordinates 0 to 6. The pixel value of 4 pixels is found to be 100, and the coordinates are (120, 3), (120, 4), (121, 3), (121, 4), respectively.
Since the found pixels having pixel values of 40 or more total 4, the feature code is recorded as 4; at the same time, the image position data is recorded as (120, 3) based on the coordinate of the pixel whose coordinate value is the smallest.
After the image processing is finished, the processing unit sends the obtained feature code and the corresponding image position data to the controlled device.
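The single-mark processing described above can be sketched in Python as follows (illustrative only; the names `process_frame` and `THRESHOLD`, and the list-of-lists image layout indexed as `image[row][col]`, are assumptions, not part of the patent):

```python
THRESHOLD = 40   # pixels at or above this value belong to a positioning mark image
WINDOW = 3       # half-width of the 7x7 neighborhood

def process_frame(image):
    """Scan left to right, top to bottom; on the first bright pixel, examine the
    surrounding 7x7 window, count the bright pixels (the feature code), and take
    the smallest coordinate as the image position data."""
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= THRESHOLD:
                found = [(rr, cc)
                         for rr in range(max(r - WINDOW, 0), min(r + WINDOW + 1, rows))
                         for cc in range(max(c - WINDOW, 0), min(c + WINDOW + 1, cols))
                         if image[rr][cc] >= THRESHOLD]
                feature_code = len(found)   # e.g. 4 for the 2x2 mark image above
                position = min(found)       # pixel with the smallest coordinates
                return feature_code, position
    return None
```

On the worked example above (four pixels of value 100 around (120, 3)), this sketch returns feature code 4 and image position data (120, 3).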
FIG. 9 shows another implementation of the infrared sensor strip of this embodiment, in which 1 longitudinal rectangular hole is formed at the left end of the front surface of the bracket 300, and two longitudinal rectangular holes, 1 mm apart, are formed at the right end. All 3 rectangular holes are 4 mm high and 1 mm wide. An infrared light-emitting diode of the same brightness is arranged behind each rectangular hole, and the two rectangular holes at the right end may share 1 infrared light-emitting diode. The 3 holes can be used as two positioning marks: the hole at the left end serves as one positioning mark, and the two holes at the right end together serve as the other. The images formed by the two positioning marks in the positioning image are composed of different numbers of pixels.
In one implementation of the infrared sensor strip of this embodiment, the strip comprises a bracket and 3 infrared light sources. The bracket is rectangular, 45 cm long, 2 cm high and 2 cm thick. The front of the bracket is provided with 3 square holes: one near each end, and one in the middle. The holes at the two ends have a side length of 2 mm; the middle hole has a side length of 3 mm. Behind the 3 holes are 3 infrared light-emitting diodes of the same brightness. Preferably, each of the 3 infrared light-emitting diodes has a luminous flux of 10 lumens. The 3 square holes capable of emitting infrared light are the 3 infrared light sources, and can serve as 3 positioning marks that form an air mouse system together with an air mouse. Compared with the scheme with two positioning marks, the moving distance of the air mouse can be further increased.
In one implementation of the infrared sensor strip of this embodiment, the strip comprises a bracket and 4 infrared light sources. Preferably, the bracket is rectangular, 25 cm long and 8 cm high. The four corners of the rectangular bracket are each provided with a square hole, and each hole has a different side length: 1 mm at the upper left corner, 2 mm at the lower left corner, 3 mm at the upper right corner, and 4 mm at the lower right corner. An infrared light-emitting diode is located behind each hole. Preferably, each of the 4 infrared light-emitting diodes has a luminous flux of 10 lumens. The 4 holes capable of emitting infrared light are the 4 infrared light sources, and can serve as 4 positioning marks that form an air mouse system together with an air mouse. Compared with the scheme with 2 positioning marks, the moving distance of the air mouse can be increased simultaneously in the transverse, longitudinal and oblique directions. As shown in FIG. 10, in this infrared sensor strip the 4 infrared light sources may be disposed on one rectangular frame, which reduces the weight and improves the appearance.
In one implementation of this embodiment, the processing unit obtains the feature codes and image position data of all the positioning mark images when processing a frame of positioning image. The image processing method adopted comprises the following steps:
S1, reading the pixel values of all pixels in the preselected area, and finding all pixels with pixel values above 40 as candidate pixels.
S2, selecting the pixel with the smallest coordinate value from the candidate pixels; with this pixel as the center, reading the pixel values of all pixels within the surrounding 7 pixel × 7 pixel range, and finding all pixels with pixel values above 40, which are called constituent pixels.
S3, recording the total number of constituent pixels as the feature code; then finding the pixel with the smallest coordinate value among the constituent pixels, and recording its coordinates as the image position data corresponding to the feature code.
S4, removing the constituent pixels from the candidate pixels; if candidate pixels remain, processing restarts from step S2; if no candidate pixels remain, the image processing process ends.
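Steps S1–S4 can be sketched in Python as follows (an illustrative sketch; the function name `extract_marks` and the list-of-lists image layout are assumptions, not part of the patent):

```python
def extract_marks(image, threshold=40, window=3):
    """S1: collect all bright pixels as candidates. S2-S4: repeatedly take the
    candidate with the smallest coordinates, gather the bright pixels in its
    7x7 neighborhood as constituent pixels, record (feature_code, position),
    and remove the constituents from the candidates."""
    rows, cols = len(image), len(image[0])
    candidates = {(r, c) for r in range(rows) for c in range(cols)
                  if image[r][c] >= threshold}                      # S1
    results = []
    while candidates:                                               # S4 loop
        r, c = min(candidates)                                      # S2
        constituents = [(rr, cc)
                        for rr in range(max(r - window, 0), min(r + window + 1, rows))
                        for cc in range(max(c - window, 0), min(c + window + 1, cols))
                        if image[rr][cc] >= threshold]
        results.append((len(constituents), min(constituents)))      # S3
        candidates -= set(constituents)                             # S4
    return results
```

On the five-pixel example below (one pixel at (5, 6) and a 2×2 block at (10, 12)), this sketch returns [(1, (5, 6)), (4, (10, 12))].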
The image processing method is exemplified below.
In a 1-frame positioning image output by the image acquisition unit, there are 5 pixels with a pixel value of 100, at coordinates (5, 6), (10, 12), (10, 13), (11, 12) and (11, 13); the pixel values of the remaining pixels are 0. The pixel at (5, 6) is the image of positioning mark A, and the remaining pixels form the image of positioning mark B. The processing unit performs the following image processing:
S1, reading the pixel values of all pixels in the preselected area, and finding 5 pixels with pixel values above 40 as candidate pixels, at coordinates (5, 6), (10, 12), (10, 13), (11, 12) and (11, 13).
S2, finding the pixel with the smallest coordinate value, (5, 6), and reading the pixel values of all pixels within the surrounding 7 pixel × 7 pixel range; the only constituent pixel found is (5, 6).
S3, there is 1 constituent pixel, so the feature code is recorded as 1, and the image position data is recorded as (5, 6).
S4, after the constituent pixel (5, 6) is removed, the candidate pixels (10, 12), (10, 13), (11, 12) and (11, 13) remain.
Returning to S2, the pixel with the smallest coordinate value, (10, 12), is found, and after the pixel values of all pixels within the surrounding 7 pixel × 7 pixel range are read, the constituent pixels (10, 12), (10, 13), (11, 12) and (11, 13) are found. The feature code is recorded as 4, and the image position data is recorded as (10, 12). After the constituent pixels are removed from the candidate pixels, no candidate pixels remain.
The image processing process then ends, and the acquired image data is: feature code 1 with corresponding image position data (5, 6); feature code 4 with corresponding image position data (10, 12).
If the infrared light-emitting diodes in the infrared sensor strips of this embodiment are replaced by light-emitting diodes that emit visible light, the strips can still provide positioning marks for the air mouse; in that case, the image acquisition unit of the air mouse is one capable of capturing visible light. Preferably, each light-emitting diode has a luminous flux of 10 lumens.
Embodiment Three
This embodiment provides an air mouse, which differs from the air mouse of Embodiment One in the method for acquiring image data. The processing unit of the air mouse of this embodiment applies the following image processing method to the positioning image:
S1, reading the pixel value of each pixel in the preselected area, in order from left to right and from top to bottom, until a pixel with a pixel value above 40 is found.
S2, with this pixel as the center, reading the pixel values of the 49 pixels within the surrounding 7 pixel × 7 pixel range, and finding all pixels with pixel values above 40. If the column coordinates of all the found pixels are the same, the feature code is recorded as 1; if the row coordinates are all the same, the feature code is recorded as 2. The pixel with the smallest coordinate value is then found among the found pixels, and its coordinates are recorded as the image position data corresponding to the feature code.
This embodiment also provides an infrared sensor strip. As shown in FIG. 11, it comprises a bracket and two infrared light sources. The bracket 300 is rectangular, and a rectangular hole is formed at each end of the front surface. Both rectangular holes are 4 mm long and 1 mm wide; the hole at the left end is longitudinal, and the hole at the right end is transverse. Behind each hole there is 1 infrared light-emitting diode, preferably each with a luminous flux of 10 lumens. The two rectangular holes capable of emitting infrared light are the two infrared light sources, and can serve as two positioning marks that form an air mouse system with the air mouse of this embodiment.
The following illustrates an image processing method of an air mouse according to the present embodiment:
in the 1-frame positioning image output by the image acquisition unit, the pixel values of 4 pixels with coordinates (119, 316), (120, 316), (121, 316) and (122, 316) are 100, and the pixel values of the rest pixels are 0. The processing unit acquires the feature codes and the image position data by the following method.
S1, in the preselected area, the pixel value of each pixel is read one by one, in order from left to right and from top to bottom, and the first pixel with a pixel value above 40 is found when the pixel at coordinates (119, 316) is read.
S2, taking the pixel as a center, reading the pixel value of 49 pixels in the range of 7 pixels multiplied by 7 pixels around. I.e., reading pixel values for all pixels in the row coordinate range of 116 to 122 and the column coordinate range of 313 to 319. The results were: the pixel value of 4 pixels is more than 40, and the coordinates are respectively (119, 316), (120, 316), (121, 316), (122, 316).
Since the column coordinates of the found 4 pixels are the same, the feature code is recorded as 1. Of the found 4 pixels, the coordinate of the pixel with the smallest coordinate value is (119, 316), and this coordinate is the image position data corresponding to the feature code 1.
The processing method acquires the feature codes and the image position data of 1 positioning mark image in 1 frame of positioning image. In an implementation manner of this embodiment, when the processing unit processes 1 frame of positioning image, the processing unit obtains feature codes and image position data of all positioning mark images, and the specific method is as follows:
S1, reading the pixel values of all pixels in the preselected area, and finding all pixels with pixel values above 40 as candidate pixels.
S2, selecting the pixel with the smallest coordinate value from the candidate pixels; with this pixel as the center, reading the pixel values of all pixels within the surrounding 7 pixel × 7 pixel range, and finding all pixels with pixel values above 20, which are called constituent pixels.
S3, if the column coordinates of all the constituent pixels are the same, the feature code is recorded as 1; if the row coordinates of all the constituent pixels are the same, the feature code is recorded as 2. The pixel with the smallest coordinate value is then found among the constituent pixels, and its coordinates are recorded as the image position data.
S4, removing from the candidate pixels those pixels whose coordinates are the same as the constituent pixels; if candidate pixels remain, processing restarts from step S2; if no candidate pixels remain, the step of acquiring image data ends.
In the present embodiment, an image processing method in the step of acquiring image data is as shown in fig. 12.
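The shape-based variant of steps S1–S4 can be sketched in Python as follows (an illustrative sketch; the function name `extract_marks_by_shape` and the two-threshold parameters are assumptions drawn from the steps above, not from fig. 12 itself):

```python
def extract_marks_by_shape(image, cand_thresh=40, part_thresh=20, window=3):
    """Variant of the extraction loop in which the feature code encodes the
    shape of the mark image: 1 if all constituent pixels share one column
    (a vertical bar), 2 if they share one row (a horizontal bar)."""
    rows, cols = len(image), len(image[0])
    candidates = {(r, c) for r in range(rows) for c in range(cols)
                  if image[r][c] >= cand_thresh}                    # S1
    results = []
    while candidates:
        r, c = min(candidates)                                      # S2
        parts = [(rr, cc)
                 for rr in range(max(r - window, 0), min(r + window + 1, rows))
                 for cc in range(max(c - window, 0), min(c + window + 1, cols))
                 if image[rr][cc] >= part_thresh]
        if len({cc for _, cc in parts}) == 1:
            code = 1                                                # same column
        elif len({rr for rr, _ in parts}) == 1:
            code = 2                                                # same row
        else:
            code = None                                             # shape not recognized
        results.append((code, min(parts)))                          # S3
        candidates -= set(parts)                                    # S4
    return results
```

Note the two thresholds: candidate pixels are gathered at 40, but constituent pixels are gathered at the lower threshold of 20, as stated in step S2.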
The following examples illustrate the processing steps.
In a 1-frame positioning image output by the image acquisition unit, 8 pixels have non-zero pixel values, at coordinates (119, 3), (120, 3), (121, 3), (122, 3), (120, 315), (120, 316), (120, 317) and (120, 318). The first 6 of these pixels have a pixel value of 100, while (120, 317) and (120, 318) have a pixel value of 30, above the constituent-pixel threshold of 20 but below the candidate threshold of 40. The first 4 pixels constitute the positioning mark A image, and the last 4 pixels constitute the positioning mark B image.
The processing unit acquires the feature codes and the image position data by adopting the following method:
s1, reading pixel values of all pixels in a preselected area, and finding out 6 pixels with pixel values of more than 40 as alternative pixels, wherein coordinates of the alternative pixels are (119, 3), (120, 3), (121, 3), (122, 3), (120, 315), (120, 316).
S2, the pixel with the smallest coordinate value, (119, 3), is selected from the candidate pixels. With this pixel as the center, the pixel values of all pixels within the surrounding 7 pixel × 7 pixel range are read, and 4 pixels with pixel values above 20 are found as constituent pixels (these 4 pixels are exactly the pixels constituting the positioning mark A image), at coordinates (119, 3), (120, 3), (121, 3) and (122, 3).
S3, since the column coordinates of the 4 constituent pixels are the same, the feature code is recorded as 1; the pixel with the smallest coordinate value among the 4 constituent pixels is found, with coordinates (119, 3), and these coordinates are recorded as the image position data corresponding to feature code 1.
And S4, in the alternative pixels, removing the constituent pixels. There are two alternative pixels left, with coordinates (120, 315), (120, 316), so the following operations continue:
the pixel with the minimum coordinate value is selected from the candidate pixels, and the coordinate of the pixel is (120, 315). With this pixel as the center, the pixel values of all pixels in the range of 7 pixels × 7 pixels around the pixel are read, and 4 pixels with pixel values of 20 or more are found as constituent pixels (the 4 pixels are all pixels constituting the anchor mark B image), and the coordinates are (120, 315), (120, 316), (120, 317), (120, 318), respectively.
The row coordinates of the 4 constituent pixels are the same, so the feature code is recorded as 2; the pixel with the smallest coordinate value among the 4 constituent pixels is found, with coordinates (120, 315), and these coordinates are recorded as the image position data corresponding to feature code 2.
S4, after the constituent pixels are removed from the 2 remaining candidate pixels, no candidate pixels remain, and the image processing ends. The acquired image data comprises feature code 1 with corresponding image position data (119, 3), and feature code 2 with corresponding image position data (120, 315).
In one implementation of the infrared sensor strip of this embodiment, the strip comprises a bracket and 3 infrared light sources. As shown in FIG. 13, the bracket is rectangular, 45 cm long, 2 cm high, and 2 cm thick. The front of the rectangular bracket is provided with 3 rectangular holes: one near each end, and one in the middle. Each rectangular hole is 4 mm long and 1 mm wide; the holes at the two ends are longitudinal, and the middle hole is transverse. The distance between each end hole and the middle hole is 20 cm. Behind the 3 holes are 3 infrared light-emitting diodes of the same brightness. Preferably, each of the 3 infrared light-emitting diodes has a luminous flux of 10 lumens. The 3 rectangular holes capable of emitting infrared light are the 3 infrared light sources, and can serve as 3 positioning marks that form an air mouse system together with the air mouse of this embodiment. Compared with the scheme with two positioning marks, the moving distance of the air mouse can be further increased.
In one implementation of the infrared sensor strip of this embodiment, the strip comprises a bracket and a plurality of infrared light sources. As shown in FIG. 14, the bracket 300 is rectangular, 25 cm long and 8 cm high. A longitudinal rectangular hole, 4 mm long and 1 mm wide, is arranged at the upper left corner; 1 transverse rectangular hole, 4 mm long and 1 mm wide, is arranged at the upper right corner; at the lower left corner there are two longitudinal rectangular holes arranged in parallel, 1 mm apart, each 2 mm long and 1 mm wide; at the lower right corner there are two transverse rectangular holes arranged in parallel, 1 mm apart, each 2 mm long and 1 mm wide. Behind each hole is a light-emitting diode, preferably an infrared light-emitting diode with a luminous flux of 10 lumens. The rectangular holes on the bracket capable of emitting infrared light can be used as positioning marks: the longitudinal rectangular hole at the upper left corner serves as one; the transverse rectangular hole at the upper right corner serves as one; the two longitudinal rectangular holes at the lower left corner together serve as one; and the two transverse rectangular holes at the lower right corner together serve as one, giving 4 positioning marks in total.
When this infrared sensor strip provides the positioning marks for the air mouse, the image of each positioning mark in the positioning image output by the image acquisition unit is composed of 4 pixels, but the arrangements differ. The image formed by the positioning mark at the upper left corner is 4 pixels arranged longitudinally in one column; the image formed by the positioning mark at the upper right corner is 4 pixels arranged transversely in one row; the image of the positioning mark at the lower left corner is 4 pixels arranged in two longitudinal columns of 2 pixels each, separated by one empty column; the image of the positioning mark at the lower right corner is 4 pixels arranged in two transverse rows of 2 pixels each, separated by one empty row. For example, the coordinates of the 4 pixels forming the image of the lower-left positioning mark may be (0, 0), (0, 2), (1, 0) and (1, 2), and the coordinates of the 4 pixels forming the image of the lower-right positioning mark may be (0, 0), (0, 1), (2, 0) and (2, 1). The rule by which the processing unit determines the feature code is: if the column coordinates of all the found pixels with pixel values above 40 are the same, the feature code is recorded as 1; if the row coordinates are all the same, the feature code is recorded as 2; if the pixels form two pairs with equal column coordinates, and within each pair the row coordinates are adjacent numbers, the feature code is recorded as 3; if the pixels form two pairs with equal column coordinates, but within each pair the row coordinates are not adjacent numbers, the feature code is recorded as 4.
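The four-way feature-code rule can be sketched in Python as follows (an illustrative sketch; the function name `classify_mark` is an assumption, and coordinates are (row, column) tuples as in the examples above):

```python
def classify_mark(pixels):
    """Classify a 4-pixel mark image: 1 for a vertical bar, 2 for a horizontal
    bar, 3 for two column-aligned pairs with adjacent rows, 4 for two
    column-aligned pairs with non-adjacent rows."""
    cols = {c for _, c in pixels}
    rows = {r for r, _ in pixels}
    if len(cols) == 1:
        return 1                               # all pixels in one column
    if len(rows) == 1:
        return 2                               # all pixels in one row
    if len(cols) == 2:
        # group the pixels into pairs that share a column
        pairs = [sorted(r for r, c in pixels if c == col) for col in cols]
        if all(b - a == 1 for a, b in pairs):
            return 3                           # rows adjacent within each pair
        return 4                               # rows not adjacent
    return None                                # shape not recognized
```

Applied to the example coordinates above, the lower-left mark image yields feature code 3 and the lower-right mark image yields feature code 4.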
If the infrared light-emitting diodes in the several infrared sensor strips of this embodiment are replaced by visible-light light-emitting diodes, the strips can likewise provide positioning marks for the air mouse; in that case, the image acquisition unit of the air mouse is one capable of capturing visible-light scenes. Preferably, each light-emitting diode has a luminous flux of 10 lumens.
Embodiment Four
This embodiment provides an air mouse, which differs from the air mouse of Embodiment One in the way the positioning image is processed. When the processing unit of the air mouse of this embodiment processes each frame of positioning image, it first performs image processing, and then sends image position data or a mark-change signal to the controlled device according to the image processing result.
The image processing comprises two steps:
firstly, acquiring the feature code and image position data of the positioning mark image.
And secondly, searching the same feature codes in the feature codes acquired by the positioning image of the previous frame processed according to the acquired feature codes, and finishing the image processing process after obtaining a result.
According to different results of image processing, the type of data sent to the controlled device by the processing unit is different. If the same feature code is found, the processing unit sends image position data corresponding to the feature code; if the same feature code is not found, the processing unit sends a rebranding signal. The processing manner of the positioning image by the processing unit is shown in fig. 15.
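The processing unit's per-frame decision can be sketched in Python as follows (an illustrative sketch; the function name `frame_output`, the data shapes, and the use of the number 1 as the mark-change signal follow the worked example later in this embodiment):

```python
def frame_output(prev_codes, frame_data):
    """Decide what the processing unit sends for one frame: if a feature code
    acquired from this frame also appeared in the previous frame, send its
    image position data; otherwise send the mark-change signal (the number 1).

    prev_codes: set of feature codes from the previous frame
    frame_data: list of (feature_code, image_position) from this frame
    """
    for code, position in frame_data:
        if code in prev_codes:
            return position      # image position data
    return 1                     # mark-change signal
```

For example, if the previous frame yielded feature code 50 and the current frame yields (50, (120, 317)), the position (120, 317) is sent; if the current frame yields only (100, (120, 312)), the mark-change signal 1 is sent instead.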
This embodiment also provides a driver, which is installed on the controlled device of the air mouse and drives the movement of the cursor on the screen according to the image position data or the mark-change signal.
The driver is preset with a recording area for recording image position data. When the driver receives image position data, then if image position data already exists in the recording area, the driver compares the received image position data with the image position data in the recording area, controls the cursor on the screen to move according to the resulting image displacement data, and updates the recording area with the just-received image position data; if there is no image position data in the recording area, the just-received image position data is stored in the recording area. When the driver receives the mark-change signal, it clears the recording area.
Fig. 16 shows a processing method of the driver of the present embodiment when receiving image position data.
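The driver's recording-area behavior can be sketched in Python as follows (an illustrative sketch; the class name `CursorDriver` and the (up, right) delta convention are assumptions — the cursor-movement rules themselves follow the comparison rules stated later in this embodiment):

```python
class CursorDriver:
    """Sketch of the driver: image position data updates the recording area and
    yields a cursor delta; the mark-change signal clears the recording area."""

    def __init__(self):
        self.record = None                     # last image position data, or None

    def on_position(self, pos):
        """Return the cursor movement as (pixels up, pixels right)."""
        delta = (0, 0)
        if self.record is not None:
            dr = pos[0] - self.record[0]       # row change: +n -> cursor up n
            dc = pos[1] - self.record[1]       # column change: -n -> cursor right n
            delta = (dr, -dc)
        self.record = pos                      # update the recording area
        return delta

    def on_mark_change(self):
        self.record = None                     # clear the recording area
```

On the frame sequence below, receiving (120, 317) into an empty recording area moves the cursor nowhere, and then receiving (120, 315) moves it 2 pixels to the right.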
When the air mouse of this embodiment obtains the feature code and image position data from the positioning image, any of the methods described in Embodiments One, Two and Three may be used.
This embodiment also provides an air mouse system, which is composed of the air mouse of this embodiment and at least two positioning marks with different characteristics. The positioning marks include the various positioning marks described in Embodiments One, Two and Three.
The processing method and the driver of the present embodiment are exemplified below.
The image acquisition unit outputs the 1st frame positioning image, and the processing unit acquires feature code 50 and corresponding image position data (120, 318). Because this is frame 1, there is no record from a previous frame, so the number 1, which is the mark-change signal, is sent to the controlled device. After receiving the mark-change signal, the driver on the controlled device clears the data in the recording area.
The image acquisition unit outputs the 2nd frame positioning image, and the processing unit acquires feature code 50 and corresponding image position data (120, 317). Searching by feature code 50, the same code is found among the feature codes obtained from the previous frame of positioning image, so (120, 317) is sent to the controlled device. After receiving the image position data, the driver of the controlled device records (120, 317) in the recording area, because the recording area is empty.
The image acquisition unit outputs the 3rd frame positioning image, and the processing unit acquires feature code 50 and corresponding image position data (120, 315). Searching by feature code 50, the same code is found among the feature codes obtained from the previous frame of positioning image, so (120, 315) is sent to the controlled device. After receiving the image position data, the driver of the controlled device compares (120, 315) with (120, 317) and finds that the value of the column coordinate has decreased by 2, so it controls the cursor on the screen to move right by 2 pixels. The data in the recording area is then updated to (120, 315).
The image acquisition unit outputs the 4th frame positioning image, and the processing unit acquires feature code 100 and corresponding image position data (120, 312). Searching by feature code 100, no match is found among the feature codes of the previous frame of positioning image, so the number 1 is sent to the controlled device, and the driver on the controlled device clears the data in the recording area.
The image acquisition unit outputs the 5th frame positioning image, and the processing unit acquires feature code 100 and corresponding image position data (120, 310); by feature code 100, the same code is found among those of the previous frame of positioning image. (120, 310) is sent to the controlled device, and the driver of the controlled device records the data in the recording area.
The image acquisition unit outputs the 6th frame positioning image, and the processing unit acquires feature code 100 and corresponding image position data (120, 308); by feature code 100, the same code is found among those of the previous frame of positioning image. (120, 308) is sent to the controlled device; the driver compares (120, 308) with (120, 310), finds the column coordinate decreased by 2, and controls the cursor on the screen to move right by 2 pixels.
After the driver compares the image position data, it controls the movement of the cursor on the screen according to the comparison result, using the following rules: if the value of the row coordinate increases by n, the cursor on the screen is moved up by n pixels; if the value of the row coordinate decreases by n, the cursor on the screen is moved down by n pixels; if the value of the column coordinate increases by n, the cursor on the screen is moved left by n pixels; if the value of the column coordinate decreases by n, the cursor on the screen is moved right by n pixels; if the values of both the row coordinate and the column coordinate change, the cursor is moved correspondingly in both directions. For example, if the values of the row coordinate and the column coordinate both increase by 1, the cursor on the screen is moved up by 1 pixel and then left by 1 pixel.
In this embodiment and the first to third embodiments, the driver and the controlled device constitute a computer system.
From the above process it can be seen that the cursor on the screen controlled by the air mouse is not moved at the moment the positioning mark serving as the aerial position reference is replaced, and that the cursor continues to move in the original direction after the reference is replaced. The air mouse of this embodiment can also increase its moving distance by adding positioning marks.
Embodiment Five
This embodiment provides an air mouse, which differs from the air mouse of Embodiment One in the way the positioning image is processed. When the processing unit of the air mouse of this embodiment processes each frame of positioning image, it obtains image displacement data through image processing.
The processing unit of the air mouse of the embodiment performs image processing on each frame of positioning image, and comprises two steps:
step one, acquiring the feature code and image position data of the positioning mark image. The specific method includes various methods described in the first embodiment, the second embodiment, and the third embodiment.
And step two, acquiring image displacement data.
The specific method is as follows: search the feature codes acquired from the previous frame of positioning image for the acquired feature code. If the same feature code is found, the image position data corresponding to that feature code in the two frames of positioning images are compared, and image displacement data are obtained from the comparison result; if not, the image processing of the current frame of positioning image ends. The image processing method is shown in fig. 17.
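The two-step, on-device processing of this embodiment can be sketched in Python as follows (an illustrative sketch; the class name `OnboardProcessor` is an assumption — here the recording area lives in the air mouse itself rather than in the driver):

```python
class OnboardProcessor:
    """Sketch of Embodiment Five's processing: each frame yields image
    displacement data only when the same feature code was acquired from the
    previous frame; the recording area is then updated either way."""

    def __init__(self):
        self.record = None                     # (feature_code, position) or None

    def process(self, code, pos):
        """Return (d_row, d_col) image displacement data, or None."""
        displacement = None
        if self.record is not None and self.record[0] == code:
            prev = self.record[1]
            displacement = (pos[0] - prev[0], pos[1] - prev[1])
        self.record = (code, pos)              # update the recording area
        return displacement
```

On the worked example below, the 1st frame (code 50) produces no displacement, the 2nd frame produces a column decrease of 1 (cursor right 1), and the frame in which the feature code changes to 100 again produces no displacement.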
After the image processing process ends, if image displacement data were generated, the processing unit sends a displacement signal to the controlled device according to the image displacement data, and the controlled device controls the driven cursor on the screen to move correspondingly according to the displacement signal.
The air mouse and the image processing method according to the present embodiment will be described below by taking the air mouse moving from left to right in the air as an example.
The air mouse and two positioning marks form an air mouse system; the two positioning marks are two infrared light sources, implemented as shown in fig. 2, arranged transversely and serving respectively as positioning mark A and positioning mark B.
The image processing program used by the processing unit establishes a recording area for recording the data obtained from the positioning image (including the feature code and image position data), and updates the data in the recording area when the processing of each frame of positioning image finishes.
The user holds the air mouse and directs the lens 202 toward positioning mark A. The image acquisition unit starts to capture and output the 1st frame positioning image, in which the pixel values of the four pixels at coordinates (120, 318), (120, 319), (121, 318) and (121, 319) are 50 and the pixel values of the remaining pixels are 0. These four pixels constitute the positioning mark A image. The image data acquired by the processing unit from this frame of positioning image are 50 and (120, 318).
The processing unit searches the recording area based on feature code 50. Since this is the first frame of positioning image processed, there is no data in the recording area; after 50 and (120, 318) are stored in the recording area, the processing of the 1st frame positioning image ends.
As the user holds the air mouse and keeps moving right, the pixel values of the four pixels at coordinates (120, 317), (120, 318), (121, 317) and (121, 318) in the second frame positioning image output by the image acquisition unit are 50, and the pixel values of the remaining pixels are 0. The processing unit obtains feature code 50 and image position data (120, 317) from this frame of positioning image.
The processing unit finds the same code in the recording area based on feature code 50. Comparing (120, 317) with (120, 318), it finds that the value of the column coordinate has decreased by 1; this is the image displacement data.
According to the image displacement data, the processing unit sends a displacement signal to the controlled device; the information contained in the displacement signal is "move right by 1". After the data in the recording area are updated to 50 and (120, 317), the processing of the current frame positioning image ends. According to the displacement signal, the controlled device controls the driven cursor on the screen to move 1 pixel to the right (with the screen facing the user).
As the user holds the air mouse and continues moving right, the image acquisition unit continuously captures and outputs positioning images; in the series of output positioning images, the position of the positioning mark A image gradually moves left. The processing unit continuously outputs displacement signals according to the image processing results, and the controlled device controls the cursor on the screen to keep moving right according to the displacement signals.
When the image acquisition unit outputs the nth frame positioning image, the processing unit obtains feature code 50 and corresponding image position data (120, 0) from it; at this point, positioning mark A has reached the edge of the visible range of the image acquisition unit's lens. After the processing of this frame positioning image is finished, the data in the recording area are 50 and (120, 0).
After the user holds the air mouse and moves it a distance to the right, only positioning mark B is within the lens's view. At this moment, the image acquisition unit outputs the (n+1)th frame positioning image, in which the pixel values of four pixels are 100 and the pixel values of the remaining pixels are 0. The four pixels constitute the positioning mark B image, at coordinates (120, 317), (120, 318), (121, 317) and (121, 318).
The image data the processing unit obtains from this positioning image are 100 and (120, 317). Searching by the feature code 100, no match is found in the recording area. The data in the recording area are then updated to 100, (120, 317), and the image processing of this frame ends.
After the user moves the air mouse a little farther to the right, 4 pixels in the next positioning image output by the image acquisition unit have the pixel value 100 and the remaining pixels have the value 0. The coordinates of the 4 pixels are (120, 316), (120, 317), (121, 316), (121, 317). After processing this positioning image, the processing unit sends a displacement signal of "move right by 1" to the controlled device, and the controlled device moves the cursor on the screen 1 pixel to the right. As the air mouse keeps moving to the right, the cursor on the screen also keeps moving to the right, until positioning mark B moves out of the visible range of the lens.
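The per-frame bookkeeping traced through these paragraphs — look the feature code up in the recording area, diff the image positions on a match, then overwrite the record — can be sketched as below. The single-entry recording area and the return convention are our assumptions for illustration:

```python
record = {}  # recording area: feature_code -> last image position (row, col)

def process_frame(feature_code, position):
    """Return the image displacement (d_row, d_col) for this frame, or
    None when the feature code was not seen in the previous frame
    (e.g. the lens has just switched from mark A to mark B)."""
    displacement = None
    if feature_code in record:
        prev_row, prev_col = record[feature_code]
        row, col = position
        displacement = (row - prev_row, col - prev_col)
    record.clear()               # the recording area holds one entry
    record[feature_code] = position
    return displacement

print(process_frame(50, (120, 318)))   # first sighting of mark A → None
print(process_frame(50, (120, 317)))   # column decreased by 1 → (0, -1)
print(process_frame(100, (120, 317)))  # mark changed to B → None
print(process_frame(100, (120, 316)))  # column decreased by 1 → (0, -1)
```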
In this embodiment, the method of obtaining the image displacement data from the feature code and image position data is the same as the method used by the driver in the first embodiment, and the resulting image displacement data likewise serve as the basis for controlling cursor movement on the screen, so the beneficial effects are the same.
In this embodiment, when comparing the image position data, the row coordinates and the column coordinates are compared separately to find the change in value. There are four possible comparison results: row coordinate increased, row coordinate decreased, column coordinate increased, and column coordinate decreased. The air mouse of this embodiment sends the displacement signal according to the image displacement data using the following rule: if the value of the row coordinate increases by n, output a displacement signal of "move up by n"; if it decreases by n, output "move down by n"; if the value of the column coordinate increases by n, output "move left by n"; if it decreases by n, output "move right by n". According to the received displacement signal, the controlled device moves the driven cursor on the screen n pixels in the corresponding direction. If both the row coordinate and the column coordinate change, both changes are reflected in the displacement signal. For example, if the row and column coordinates are both measured to have increased by 1, the output displacement signal contains "move up by 1, move left by 1"; on receiving it, the controlled device moves the cursor on the screen up 1 pixel and then left 1 pixel. The specific format of the displacement signal may follow the data format of an optical mouse.
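The rule in this paragraph maps image displacement data to a displacement signal term by term. A sketch, representing the signal as a list of (direction, pixels) pairs — an illustrative encoding, not the optical-mouse data format the text refers to:

```python
def displacement_signal(d_row, d_col):
    """Translate image displacement data into a displacement signal per
    the rule in the text: row +n -> up n, row -n -> down n,
    column +n -> left n, column -n -> right n."""
    signal = []
    if d_row > 0:
        signal.append(("up", d_row))
    elif d_row < 0:
        signal.append(("down", -d_row))
    if d_col > 0:
        signal.append(("left", d_col))
    elif d_col < 0:
        signal.append(("right", -d_col))
    return signal

print(displacement_signal(0, -1))  # column decreased by 1 → [('right', 1)]
print(displacement_signal(1, 1))   # both increased by 1 → [('up', 1), ('left', 1)]
```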
In one implementation of the air mouse of this embodiment, the rule for sending the displacement signal according to the image displacement data is reversed: if the value of the column coordinate increases, a displacement signal moving rightward is output; if the value of the column coordinate decreases, a displacement signal moving leftward is output. In this way, when moving transversely, the cursor on the screen moves in the direction opposite to that of the air mouse. The direction of longitudinal movement can also be modified in the same way.
In another implementation of the air mouse of this embodiment, the processing unit multiplies the image displacement data by 2 before output. For example, if the column coordinate is measured to have decreased by 1, a displacement signal of "move right by 2" is output. As required, the image displacement data may also be multiplied by 3, 4, or more before output.
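The two variants just described — reversing the transverse mapping and multiplying the displacement — reduce to a sign flip and a gain factor on the column-coordinate half of the rule. A sketch with hypothetical parameter names:

```python
def scaled_signal(d_col, invert_x=False, gain=1):
    """Column-axis half of the displacement rule, with the two variants
    from the text: invert_x flips left/right, gain multiplies the
    number of pixels moved."""
    if d_col == 0:
        return None
    direction = "left" if d_col > 0 else "right"
    if invert_x:
        direction = "right" if direction == "left" else "left"
    return (direction, abs(d_col) * gain)

print(scaled_signal(-1))                # default rule → ('right', 1)
print(scaled_signal(1, invert_x=True))  # reversed variant → ('right', 1)
print(scaled_signal(-1, gain=2))        # doubled variant → ('right', 2)
```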
Example Six
The present embodiment provides an air mouse, as shown in fig. 18, comprising a housing, an image acquisition unit, and a processing unit. The image acquisition unit shoots a scene that includes a positioning mark and outputs a positioning image containing a positioning mark image; the processing unit performs image processing on the positioning image and sends a cursor control signal to the controlled device according to the image processing result. When the positioning image is processed, the feature code and image position data of the positioning mark image are obtained.
The air mouse of the present embodiment is equivalent to the air mouse of any of the first to fifth embodiments with the key unit removed.
In this embodiment, when the processing unit of the air mouse performs image processing on a frame of positioning image, any of the image processing methods described in the first to fifth embodiments may be adopted.
Because it has no key unit, the air mouse of this embodiment can be made smaller. For example, a cube with a side length of 5 cm can serve as the housing, with a hole opened in one face to hold the lens of the image acquisition unit.
This air mouse is particularly suitable for smart mobile devices, including smartphones and tablet computers. When used with a smartphone, the player holds the phone with the screen facing the player. The air mouse is attached to the back of the phone, with the lens on the face opposite the attached face; that is, the lens faces forward, away from the player, in the same direction as the camera lens on the back of the phone. The processing unit is connected to the phone through a data cable. In front of the player, one or more positioning marks are placed, as needed, where the lens can capture them. The player can then control the cursor on the screen by holding the phone and moving it in the air.
Example Seven
The present embodiment provides an air mouse in which, on the basis of the air mouse of the first to fifth embodiments, the processing unit is divided into an image processing unit and a main control unit. As shown in fig. 19, the air mouse of this embodiment includes a housing, an image acquisition unit, an image processing unit, a key unit, and a main control unit. The image acquisition unit shoots a scene that includes a positioning mark and outputs a positioning image containing a positioning mark image; the image processing unit performs image processing on the positioning image and sends the image processing result to the main control unit; the main control unit sends a cursor control signal to the controlled device according to the image processing result; the key signal generated by the key unit is sent to the controlled device through the main control unit. When processing the positioning image, the image processing unit may adopt any of the image processing methods used by the processing units in the first to fifth embodiments.
When the image processing unit adopts the image processing method of the first, second, or third embodiment, a set of image data (including the feature code and the image position data) is obtained each time a frame of positioning image is processed; this set of image data is sent to the main control unit, which forwards it to the controlled device.
When the image processing unit adopts the image processing method of the fourth embodiment, each time a frame of positioning image is processed, the obtained image position data or label-change signal is sent to the controlled device through the main control unit.
When the image processing unit adopts the image processing method of the fifth embodiment, each time a frame of positioning image is processed, the obtained image displacement data are sent to the main control unit, which sends a displacement signal to the controlled device according to the image displacement data. The rule for sending the displacement signal according to the image displacement data is the same as in the fifth embodiment.
The image processing unit of the air mouse of this embodiment may be implemented as a single image processing chip. The image processing chip may be a general-purpose microprocessor (e.g., a single-chip microcomputer) or a digital signal processor (DSP). Preferably, the image processing chip is an STM32F407 chip from STMicroelectronics, with its input electrically connected to the image acquisition unit and its output electrically connected to the main control unit.
The image processing chip in this embodiment adopts the image processing method of the present invention, so an air mouse using this image processing chip can increase its movement length by adding positioning marks.
Example Eight
The embodiment provides a cloud game system. The system comprises an air mouse, a client device, and a cloud server. The air mouse comprises a housing, an image acquisition unit, a key unit, and a processing unit; the image acquisition unit shoots a scene that includes a positioning mark and outputs a positioning image containing a positioning mark image; the key unit generates a key signal and sends it to the client device through the processing unit; the processing unit performs image processing on the positioning image and sends a cursor control signal to the client device according to the image processing result. The processing unit may process the positioning image with any of the image processing methods of the first to fifth embodiments of the invention.
The client device sends the cursor control signal and the key signal to the cloud server. The cloud server runs the game program, generates screen display data according to the cursor control signal and the key signal, and sends the screen display data to the client device; the client device then updates the display content on the screen of the display device it drives accordingly.
Different image processing methods make the air mouse output different cursor control signals. When the received cursor control signal is a displacement signal, the cloud server generates the screen display data directly from the displacement signal. When the received cursor control signal consists of the feature code and image position data, the cloud server is provided with the driver described in the first embodiment, obtains the image displacement data through the driver, and generates the screen display data from the image displacement data. When the received cursor control signal consists of the image position data and the label-change signal, the cloud server is provided with the driver described in the fourth embodiment, obtains the image displacement data through the driver, and generates the screen display data from the image displacement data.
For example, the client device is a PC driving a liquid crystal display. The cloud server runs a shooting game, and the air mouse controls the crosshair on the screen. The air mouse sends a displacement signal (containing the information "move right by 10 pixels") to the PC; the PC forwards the signal to the cloud server, which computes and generates a video data packet (containing all the data needed to display one frame on the screen). The video data packet is sent to the PC, which drives the liquid crystal display according to the data in it, displaying a picture in which the crosshair has moved 10 pixels to the right. When the air mouse continuously sends displacement signals, the picture on the screen changes continuously as well.
The air mouse sends a key signal to the PC, the instruction contained in which is to change the magazine (reload). The key signal is transmitted to the cloud server through the PC. The cloud server computes and generates 50 video data packets and sends them to the PC in succession, each containing the data needed to display one frame of picture; each time the PC receives a video data packet, it drives the liquid crystal display to show one frame, and the succession of these 50 frames is the continuous animation of the game character changing the magazine.
The displacement signal sent by the air mouse may take the following format: the moving directions up, down, left, and right are represented by 1, 2, 3, and 4 respectively, followed by a number indicating how many pixels to move. The cloud server outputs video data packets according to the received displacement signal and thereby controls the display content on the screen. For example, on receiving "2+5", the cloud server makes the screen driven by the PC display the picture after the cursor has moved down 5 pixels; on receiving "3+2", it makes the screen display the picture after the cursor has moved left 2 pixels.
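A decoder for this example format (direction code, "+", pixel count) might look as follows; the format is only suggested by the text, so this parser is illustrative:

```python
# Direction codes from the text: 1=up, 2=down, 3=left, 4=right.
DIRECTIONS = {"1": "up", "2": "down", "3": "left", "4": "right"}

def decode_signal(signal):
    """Decode a displacement signal such as '2+5' into a
    (direction, pixels) pair per the example format in the text."""
    code, pixels = signal.split("+")
    return DIRECTIONS[code], int(pixels)

print(decode_signal("2+5"))  # → ('down', 5)
print(decode_signal("3+2"))  # → ('left', 2)
```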
The cloud server controls the screen display content according to the image displacement data using the following rule: for the four cases of row coordinate increase, row coordinate decrease, column coordinate increase, and column coordinate decrease, it moves the cursor on the screen up, down, left, and right respectively, and the number of pixels moved equals the value of the coordinate change. For example, if the row coordinate of the image displacement data increases by 5, the cloud server makes the screen driven by the PC display the picture after the cursor has moved up 5 pixels.
The controlled devices of the cloud game system of this embodiment are various types of electronic computing devices, including but not limited to PCs, video game consoles, smart televisions, tablet computers, and smartphones.
In this embodiment, the driver and the cloud server constitute a computer system.
The cloud game system of this embodiment can increase the movement length of the air mouse by adding positioning marks, so that the air mouse can have both a large movement length and high movement accuracy, improving the game experience.
The above description covers only preferred embodiments of the invention and is not intended to limit it; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (10)
1. An image processing method is used for an air mouse and is characterized in that when a positioning image is subjected to image processing, a feature code and image position data of a positioning mark image are obtained; wherein the feature code is determined based on image features of the landmark image.
2. The image processing method according to claim 1, wherein the feature code and the image position data are acquired from the positioning image, and then the feature code is compared with the feature code acquired from the previous positioning image.
3. The image processing method according to claim 1, wherein after the feature code and the image position data are obtained from the positioning image, they are compared with the feature code and image position data obtained from the previous frame of positioning image, and image displacement data are further obtained.
4. The image processing method according to claim 1, wherein, when the positioning image is processed, the feature code and image position data of one positioning mark image are obtained.
5. The image processing method according to claim 1, wherein, when the positioning image is processed, the feature codes and image position data of all the positioning mark images are obtained.
6. An air mouse is characterized by comprising a shell, an image acquisition unit and a processing unit;
the image acquisition unit is used for shooting a scene comprising a positioning mark and outputting a positioning image with a positioning mark image; the processing unit carries out image processing on the positioning image and sends a cursor control signal to a controlled device according to an image processing result; when the positioning image is processed, the characteristic code and the image position data of the positioning mark image are obtained, wherein the characteristic code is determined according to the characteristics of the positioning mark image.
7. The air mouse of claim 6, further comprising a key unit; the key unit is used for inputting key signals, and the key signals are sent to a controlled device through the processing unit.
8. An air mouse system is characterized by comprising an air mouse and at least two positioning marks with different characteristics;
the air mouse comprises a shell, an image acquisition unit, a key unit and a processing unit; the image acquisition unit is used for shooting a scene comprising a positioning mark and outputting a positioning image with a positioning mark image; the key signal input by the key unit is sent to the controlled equipment through the processing unit; and the processing unit performs image processing on the positioning image and sends a cursor control signal to the controlled equipment according to an image processing result.
The positioning mark images formed in the positioning images have different image characteristics.
9. The air mouse system of claim 8, wherein the positioning mark images formed in the positioning image by the positioning marks with different characteristics have different pixel values for their constituent pixels.
10. The air mouse system of claim 8, wherein the positioning mark images formed in the positioning image by the positioning marks with different characteristics are composed of different numbers of pixels.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910618882.0A CN112214117A (en) | 2019-07-10 | 2019-07-10 | Image processing method and chip of air mouse, air mouse and system |
PCT/CN2020/101074 WO2021004505A1 (en) | 2019-07-10 | 2020-07-09 | Air mouse, air mouse system, image processing method and control method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112214117A true CN112214117A (en) | 2021-01-12 |
Family
ID=74047094
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910618882.0A Pending CN112214117A (en) | 2019-07-10 | 2019-07-10 | Image processing method and chip of air mouse, air mouse and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112214117A (en) |
WO (1) | WO2021004505A1 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005346145A (en) * | 2004-05-31 | 2005-12-15 | Olympus Corp | Picture processor, method and program for processing picture, and recording medium |
CN102508565B (en) * | 2011-11-17 | 2014-06-25 | Tcl集团股份有限公司 | Remote control cursor positioning method and device, remote control and cursor positioning system |
TW201443705A (en) * | 2013-05-01 | 2014-11-16 | Unigrand Ltd | Cursor locating device and cursor locating method |
CN104731373B (en) * | 2013-12-18 | 2017-12-15 | 原相科技股份有限公司 | Hand-held indicator device and its cursor positioning method |
- 2019-07-10: CN application CN201910618882.0A filed (publication CN112214117A, status: active, Pending)
- 2020-07-09: WO application PCT/CN2020/101074 filed (publication WO2021004505A1, status: active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2021004505A1 (en) | 2021-01-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210112 |