CN111752456A - Projection interactive system design based on image sensor


Info

Publication number
CN111752456A
Authority
CN
China
Prior art keywords: image, touch, area, motion, value
Prior art date
Legal status
Pending
Application number
CN202010604944.5A
Other languages
Chinese (zh)
Inventor
吕明
武国芳
葛宏义
张元�
Current Assignee
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN202010604944.5A priority Critical patent/CN111752456A/en
Publication of CN111752456A publication Critical patent/CN111752456A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 - Details of colour television systems
    • H04N 9/12 - Picture reproducers
    • H04N 9/31 - Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Position Input By Displaying (AREA)

Abstract

As human-computer interaction evolves, user expectations of interaction modes keep changing. To address these demands, a projection interaction system based on an image sensor is designed and implemented. The hardware platform comprises an active infrared camera, a computer, a projector, and a stand. An infrared image sensor is used for image acquisition, which effectively filters out interference from light reflected off the projector's picture and provides clean raw image data for subsequent image processing. The platform runs two software systems. The first is touch recognition software, implemented with combined C# and HALCON programming, which performs image acquisition, image preprocessing, optical flow tracking, target recognition, target behavior recognition, and inter-software communication. The second is programmed animation software, built with VVVV, which designs the interactive content and displays touch effects. The touch recognition software identifies touch information and, via keyboard commands, drives the animation software to play the corresponding animation effects, realizing desktop-level projection interaction.

Description

Projection interactive system design based on image sensor
Technical Field
The invention belongs to the technical field of human-computer interaction, and particularly relates to a projection interaction system design based on an image sensor.
Background
As research on projection interactive systems has spread worldwide, many laboratories have carried out related work. An infrared-based wall projection interactive system segments the moving target with background subtraction, extracts human contour features with an edge detection operator, and recognizes human motion with a hidden Markov model; however, the algorithm is complex and performs poorly in real time. Another system uses linear infrared light assistance: an infrared light curtain reflects infrared light at a person's foot, forming a bright crescent that highlights the target, and the foot is recognized by a two-stage classifier. Because recognition relies on the reflected highlight spot, the method is relatively sensitive to ambient-light interference; moreover, a linear infrared tube can only generate a planar light curtain, so it cannot be used for curved scenes such as sand tables and is limited to ground-plane interaction. A projection interactive system that combines binocular vision with a convolutional neural network suppresses ambient-light interference effectively, but binocular vision is computationally complex, and its application scenario is ground interaction, leaving desktop interaction unsolved.
Weighing universality, cost, real-time performance, and effect, this system adopts an active infrared camera for projection interaction, which effectively filters interference from the projector's picture light. The active infrared camera and the projector are mounted above the desktop, and touch actions on a plane or a curved surface are recognized by combining optical flow tracking, fingertip motion recognition, and continuous motion feature analysis. The resulting active infrared camera projection interactive system has a simple hardware structure and low cost.
Disclosure of Invention
(1) System initialization. When the image processing program starts, one-time setup tasks are completed, including camera setup, window setup, and touch-area demarcation.
1. Camera setup
Connect the infrared camera and read camera images with the operator open_framegrabber, selecting the DirectShow interface and configuring camera parameters with their defaults. To ensure the image is bright enough, in preparation for demarcating the touch areas below, the image energy is raised by acquiring frames cyclically several times.
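A minimal C# sketch of this step using HALCON's .NET interface (HOperatorSet); the device string, warm-up frame count, and class name are illustrative assumptions, not values given above.

```csharp
using HalconDotNet;

static class CameraSetup
{
    // Open a DirectShow framegrabber with default parameters, then grab a few
    // frames cyclically so exposure settles and the image is bright enough
    // for the touch-area demarcation that follows.
    public static HObject OpenAndWarmUp(out HTuple acqHandle, int warmUpFrames = 10)
    {
        // Parameter list mirrors open_framegrabber; "default" device and port -1
        // are placeholders for the actual camera configuration.
        HOperatorSet.OpenFramegrabber("DirectShow", 1, 1, 0, 0, 0, 0,
            "progressive", 8, "rgb", -1, "false", "default", "default",
            -1, -1, out acqHandle);

        HObject image = null;
        for (int i = 0; i < warmUpFrames; i++)  // cyclic acquisition raises image energy
        {
            image?.Dispose();
            HOperatorSet.GrabImage(out image, acqHandle);
        }
        return image;
    }
}
```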
2. window arrangement
HALCON windows serve as an important channel for human-machine interaction: intermediate results of every image processing stage can be displayed on them. The size of the current image is obtained with the operator get_image_size, so that the opened image window automatically adapts to the image size and the displayed image never exceeds the display control when the host computer interface is designed. The drawing mode for image outlines and indicator marks in the window is also set here.
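A sketch of the window setup under the same HALCON .NET assumption; the draw mode and color shown are examples of the "drawing mode" settings mentioned above, not prescribed values.

```csharp
using HalconDotNet;

static class WindowSetup
{
    // Open a HALCON window whose extents match the current image, so the
    // displayed image cannot exceed the display control on the host UI.
    public static HTuple OpenAdaptedWindow(HObject image)
    {
        HOperatorSet.GetImageSize(image, out HTuple width, out HTuple height);
        HOperatorSet.OpenWindow(0, 0, width, height, 0, "visible", "", out HTuple windowHandle);
        HOperatorSet.SetPart(windowHandle, 0, 0, height - 1, width - 1); // map the full image
        HOperatorSet.SetDraw(windowHandle, "margin");  // outlines rather than filled regions
        HOperatorSet.SetColor(windowHandle, "red");    // color for contours and indicator marks
        HOperatorSet.DispObj(image, windowHandle);
        return windowHandle;
    }
}
```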
3. Demarcating a touch area
First, an empty tuple is created to store all region variables; a loop then stores the manually demarcated regions of interest into the tuple one by one. Pressing the left mouse button in the image variable window draws a rectangle and yields its four corner coordinates; pressing the right button finishes drawing, and a rectangular region is generated from those coordinates. After all areas have been demarcated, the regions are merged, and the merged region is used to extract the corresponding partial image.
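The demarcation loop might look like the following sketch (HALCON .NET); the helper name and the use of reduce_domain for extracting the partial image are assumptions about how the step is realized.

```csharp
using HalconDotNet;

static class TouchAreas
{
    // Let the user outline areaCount touch areas: draw_rectangle1 blocks while the
    // left button drags a rectangle and returns after the right-click; each rectangle
    // is appended to a region tuple, and the union crops the corresponding partial image.
    public static HObject Demarcate(HTuple windowHandle, HObject image, int areaCount)
    {
        HOperatorSet.GenEmptyObj(out HObject allAreas);  // empty tuple for all region variables
        for (int i = 0; i < areaCount; i++)
        {
            HOperatorSet.DrawRectangle1(windowHandle,
                out HTuple r1, out HTuple c1, out HTuple r2, out HTuple c2);
            HOperatorSet.GenRectangle1(out HObject rect, r1, c1, r2, c2); // region from corners
            HOperatorSet.ConcatObj(allAreas, rect, out HObject grown);
            allAreas = grown;
        }
        HOperatorSet.Union1(allAreas, out HObject merged);                  // combine all areas
        HOperatorSet.ReduceDomain(image, merged, out HObject partialImage); // extract partial image
        return partialImage;
    }
}
```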
(2) The touch-area motion detection stage comprises optical flow computation and optical flow field analysis. It determines whether valid hand motion exists in a touch area: if so, processing enters the valid-touch analysis stage; if not, detection continues.
1. Optical flow computation
Optical flow tracking in HALCON is implemented with the operator optical_flow_mg, which requires two consecutive monochrome images in a time series as input. Since the images obtained from the infrared camera are RGB color images, a gray-scale conversion is needed to satisfy the monochrome input requirement; one monochrome image is the current frame, the other the previous one. Because optical flow computation is time-consuming, the input images are scaled before each computation with a scale factor of 0.4, which raises the processing speed while still meeting the processing requirements. The operator output parameter VectorField is the computed optical flow field.
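A sketch of this flow computation (HALCON .NET); the optical_flow_mg parameter values shown are HALCON-style defaults, not the tuned settings of this system.

```csharp
using HalconDotNet;

static class Flow
{
    // Gray-convert both RGB frames, scale them to 40% to shorten computation,
    // and run optical_flow_mg on the previous/current pair.
    public static HObject Compute(HObject prevRgb, HObject currRgb)
    {
        HOperatorSet.Rgb1ToGray(prevRgb, out HObject prevGray);  // monochrome input required
        HOperatorSet.Rgb1ToGray(currRgb, out HObject currGray);
        HOperatorSet.ZoomImageFactor(prevGray, out HObject prevSmall, 0.4, 0.4, "constant");
        HOperatorSet.ZoomImageFactor(currGray, out HObject currSmall, 0.4, 0.4, "constant");
        HOperatorSet.OpticalFlowMg(prevSmall, currSmall, out HObject vectorField,
            "fdrig", 0.8, 1.0, 10.0, 5, "default_parameters", "accurate");
        return vectorField;  // the computed optical flow field (VectorField)
    }
}
```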
2. Accelerating optical flow field analysis and removing interference
Completing touch analysis for all touch areas sequentially in a plain loop means every area is computed each cycle, which takes too long. Instead, a feature detection and selection method removes touch areas that fail the screening conditions so that no subsequent touch analysis runs on them, which speeds up computation and removes some interference. Two features are screened: the vector length of the optical flow field and the motion-image area. The vector length represents the amount of motion change, and a minimum threshold removes micro-motion interference. The motion-image area is screened by its maximum: after one touch optical flow computation, several different motion areas may exist simultaneously, and selecting by the area of the largest motion region removes noise interference.
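The two screening features could be implemented roughly as below; the length threshold and the use of vector_field_length / select_shape_std are assumptions about how the screening is realized, not operators named above.

```csharp
using HalconDotNet;

static class MotionScreen
{
    // Feature 1: flow-vector length — drop micro-motion below a minimum threshold.
    // Feature 2: motion-image area — keep only the largest moving region.
    public static bool TryGetMotionRegion(HObject vectorField, out HObject largest)
    {
        HOperatorSet.VectorFieldLength(vectorField, out HObject lengthImg, "length");
        HOperatorSet.Threshold(lengthImg, out HObject moving, 1.0, 100000.0); // min length (placeholder)
        HOperatorSet.Connection(moving, out HObject blobs);
        HOperatorSet.SelectShapeStd(blobs, out largest, "max_area", 70);      // maximal area wins
        HOperatorSet.AreaCenter(largest, out HTuple area, out HTuple _, out HTuple _);
        return area.Length > 0 && area.I > 0;  // false: skip touch analysis for this area
    }
}
```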
(3) Analysis of touch results
1. Obtaining direction of motion
After one shape transformation of the hand image region, the convexity parameter is selected so that the region outline becomes complete and smooth. The optical flow field is cropped to the transformed region and used to compute the motion direction. Conversion from the optical flow field to real numbers is done with the HALCON operator vector_field_to_real, which converts a vector-field image into two real-valued images containing the vector components in the row and column directions, respectively. The operator intensity computes the mean and deviation of the gray values, giving the mean gray value of each of the row and column real-valued images; these two values are the final component vectors of the motion direction along the X and Y axes, and their combination represents the motion direction.
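A sketch of this direction computation (HALCON .NET); cropping the flow field with reduce_domain is an assumption about how the "interception" is done.

```csharp
using HalconDotNet;

static class Direction
{
    // Convex shape transformation smooths the hand region; the flow field is cut
    // to that region; vector_field_to_real yields row/column component images whose
    // mean gray values (via intensity) are the X/Y motion components.
    public static void Compute(HObject handRegion, HObject vectorField,
                               out double dx, out double dy)
    {
        HOperatorSet.ShapeTrans(handRegion, out HObject convex, "convex");
        HOperatorSet.ReduceDomain(vectorField, convex, out HObject flowRoi);
        HOperatorSet.VectorFieldToReal(flowRoi, out HObject rowComp, out HObject colComp);
        HOperatorSet.Intensity(convex, rowComp, out HTuple meanRow, out HTuple devRow);
        HOperatorSet.Intensity(convex, colComp, out HTuple meanCol, out HTuple devCol);
        dx = meanRow.D;  // component along X (down the rows, per the convention below)
        dy = meanCol.D;  // component along Y (right along the columns)
    }
}
```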
2. Calculating fingertip coordinates
First, the center coordinates of the selected moving target region are computed with the operator area_center. With this center as the starting point, the X-axis and Y-axis component vectors of the motion direction define a ray. The component vectors are specially processed so that the ray always points toward the inside of the image. The intersection of the ray with the contour of the selected motion region is the computed fingertip coordinate position.
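One way to realize the ray/contour intersection is to march from the center until the ray exits the region, as in this sketch; the step limit and the test_region_point march are assumptions standing in for an exact contour intersection.

```csharp
using System;
using HalconDotNet;

static class Fingertip
{
    // Start at the region center, step along the (inward-pointing) motion direction,
    // and stop at the first point outside the region: that exit point approximates
    // the intersection of the ray with the region contour, i.e. the fingertip.
    public static void Locate(HObject motionRegion, double dx, double dy,
                              out double tipRow, out double tipCol)
    {
        HOperatorSet.AreaCenter(motionRegion, out HTuple _, out HTuple row, out HTuple col);
        double norm = Math.Sqrt(dx * dx + dy * dy) + 1e-9;
        double stepR = dx / norm, stepC = dy / norm;  // unit step, assumed already flipped inward
        double r = row.D, c = col.D;
        HTuple inside = 1;
        for (int i = 0; i < 2000 && inside.I == 1; i++)  // guard against leaving the image
        {
            r += stepR; c += stepC;
            HOperatorSet.TestRegionPoint(motionRegion, (int)r, (int)c, out inside);
        }
        tipRow = r; tipCol = c;
    }
}
```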
3. Calculating included angle value between motion vector and positive direction of X axis of image coordinate system
The image coordinate system takes the upper-left corner of the image as the coordinate origin, with vertically downward as the positive X axis and horizontally rightward as the positive Y axis. The component vectors of the motion direction along X and Y are translated to the origin of the image coordinate system, and the angle between the motion vector and the positive X axis is obtained with the inverse cosine function; since arccos alone covers only 0 to 3.14, the sign of the Y component resolves the quadrant, giving an angle in the range 0 to 6.28.
4. Calculating the included angle value between the connecting line of the center of the touch area and the origin of the image coordinate system and the positive direction of the X axis
A coordinate system is established with the image center as the origin, vertically downward as the positive X axis, and horizontally rightward as the positive Y axis. The center coordinates of the touch area are obtained with the operator area_center, and the angle between the line joining this center to the coordinate origin and the positive X axis is computed with the arcsine function.
5. Calculating a corrected angle value
The angles obtained in the previous two steps are combined and corrected to yield a new angle value that represents the type of touch action. A press has a forward component and a bounce a backward component, so the press value lies between 1.57 and 4.71, while the bounce value lies between 0 and 1.57 or between 4.71 and 6.28.
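Steps 3 to 5 can be condensed with atan2, which packages the arccos/arcsin-plus-quadrant logic; the subtraction used as the "synthesis and correction" is an assumption, since the exact correction formula is not spelled out above.

```csharp
using System;

static class TouchType
{
    // X points down, Y points right; angles are measured from +X in 0..2*pi (~0..6.28).
    public static string Classify(double dx, double dy,            // motion components
                                  double centerX, double centerY)  // area center, image-centered frame
    {
        double motionAngle = Wrap(Math.Atan2(dy, dx));            // motion vector vs +X axis
        double bearing     = Wrap(Math.Atan2(centerY, centerX));  // center line vs +X axis
        double corrected   = Wrap(motionAngle - bearing);         // synthesized, corrected value

        // press has a forward component: 1.57..4.71; bounce: 0..1.57 or 4.71..6.28
        return (corrected > Math.PI / 2 && corrected < 3 * Math.PI / 2) ? "press" : "bounce";
    }

    static double Wrap(double a) => a < 0 ? a + 2 * Math.PI : a;  // map (-pi, pi] into 0..2*pi
}
```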
6. Displaying the processing result
Graphical information such as the original image, the hand-region outline, the motion-vector direction, and the touch-area outlines is displayed on the image variable window with the operator dev_display; character information such as the fingertip coordinates, the motion-vector angle in the image coordinate system, the touch-area center-line angle, the corrected angle value, and the number of the currently touched area is printed on the window with the operator disp_message, completing the debugging and result display functions.
(4) Touch recognition software design
1. Camera control module
The camera control module consists of two button Click events and a timer Tick event. The Click event of the camera-open button clears all variable structures, connects the camera, sets up the window, and enables the image acquisition timer. The Click event of the camera-close button disables the image acquisition timer, closes the camera, closes the window, and releases resources. The timer Tick event period is set to 50 milliseconds, and the image acquisition and display task is executed inside this event.
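A WinForms sketch of this module; control names, the window-setup call, and cleanup details are placeholders rather than the actual form's members.

```csharp
using System;
using System.Windows.Forms;
using HalconDotNet;

public partial class MainForm : Form
{
    private HTuple acqHandle, windowHandle;
    private readonly Timer grabTimer = new Timer { Interval = 50 };  // 50 ms Tick period

    public MainForm() { grabTimer.Tick += GrabTimer_Tick; }

    private void btnOpenCamera_Click(object sender, EventArgs e)
    {
        // clear variables, connect the camera, set up the window, enable acquisition
        HOperatorSet.OpenFramegrabber("DirectShow", 1, 1, 0, 0, 0, 0, "progressive",
            8, "rgb", -1, "false", "default", "default", -1, -1, out acqHandle);
        // windowHandle = WindowSetup.OpenAdaptedWindow(...);  // see the earlier sketch
        grabTimer.Start();
    }

    private void btnCloseCamera_Click(object sender, EventArgs e)
    {
        grabTimer.Stop();                           // disable the acquisition timer first
        HOperatorSet.CloseFramegrabber(acqHandle);  // close the camera, release resources
    }

    private void GrabTimer_Tick(object sender, EventArgs e)
    {
        HOperatorSet.GrabImage(out HObject image, acqHandle);  // acquisition task
        HOperatorSet.DispObj(image, windowHandle);             // display task
        image.Dispose();
    }
}
```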
2. Area demarcation module
The area demarcation module consists of one button Click event. The number of touch areas must be set before clicking; after the demarcate-area button is clicked, holding down the left mouse button draws a region, and releasing the left button followed by a right click finishes one touch-area drawing, repeated until all touch areas are drawn. The DrawRectangle1 function waits for the right-click before the next loop iteration proceeds. Once all areas are drawn, the program disables the image acquisition timer, enables the image processing timer, and enters the touch recognition stage.
3. Image processing module
The image processing module is implemented with a timer event. The program is realized by reorganizing and optimizing the C# dynamic link library exported from HALCON, and the final processing result is shown in a TextBox control. To improve the stability and accuracy of touch recognition, continuous trajectory analysis is added on top of the original HALCON vision algorithm. After optimizing the coordinate system, the press action ranges over 0 to 3.14 and the bounce action over -3.14 to 0. The continuous trajectory analysis applies median filtering, adds de-jitter processing and stability detection, and recognizes the whole touch process: transient steady state, rising edge, jitter, steady state, jitter, falling edge, transient steady state.
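The continuous-trajectory analysis could be organized as a small state machine over median-filtered angle values, as in this sketch; the window size and the use of the angle's sign as the press test are assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;

class TouchTrack
{
    // States of the touch process; the two jitter phases are absorbed by the
    // median filter rather than modeled explicitly.
    enum State { TransientSteady, RisingEdge, Steady, FallingEdge }
    State state = State.TransientSteady;
    readonly Queue<double> window = new Queue<double>();

    // correctedAngle in 0..3.14 = press, -3.14..0 = bounce (optimized coordinates).
    // Returns true exactly once per completed touch.
    public bool Update(double correctedAngle)
    {
        window.Enqueue(correctedAngle);
        if (window.Count > 5) window.Dequeue();                          // sliding window
        double median = window.OrderBy(v => v).ElementAt(window.Count / 2);  // de-jitter

        bool pressing = median > 0;
        switch (state)
        {
            case State.TransientSteady: if (pressing) state = State.RisingEdge; break;
            case State.RisingEdge:      state = State.Steady; break;    // stability detected
            case State.Steady:          if (!pressing) state = State.FallingEdge; break;
            case State.FallingEdge:     state = State.TransientSteady;
                                        return true;                    // falling edge: touch done
        }
        return false;
    }
}
```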
4. Communication module
The communication module shares one timer event with the image processing module and implements communication with the VVVV animation software. A key-event communication scheme is used: a press in each touch area triggers a unique key value, and touch information is conveyed through that key value. The VVVV side implements a corresponding key-value receiving program, and different key values trigger the corresponding animation effects.
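A sketch of the sending side, assuming the key events are injected with WinForms SendKeys while the VVVV patch holds keyboard focus; the a-z mapping matches the 26-key limit described below.

```csharp
using System.Windows.Forms;

static class TouchSender
{
    // Each touch area triggers a unique key value; the key is delivered to the
    // foreground window (assumed to be the VVVV patch) as a simulated keystroke.
    public static void SendTouch(int touchAreaIndex)
    {
        if (touchAreaIndex < 0 || touchAreaIndex > 25) return;  // at most 26 key values
        char key = (char)('a' + touchAreaIndex);                // unique key per area
        SendKeys.SendWait(key.ToString());                      // key-event communication
    }
}
```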
(5) Programming animation software design
1. Keyboard communication module
The keyboard communication module receives the key values sent by the touch recognition software and outputs each as its corresponding ASCII code. The keyboard channel supports at most 26 key values, corresponding to 26 interaction points. Communication is one-way: the touch recognition software is the sender and the VVVV software the receiver.
2. Picture processing module
The input parameters of each picture processing module comprise position parameters and control parameters. The position parameters adjust the picture's on-screen coordinate position, zoom size, and rotation angle; the control parameters (open picture, close picture, previous picture, next picture) are driven by the key values passed from the keyboard communication module.
3. Video processing module
The input parameters of each video processing module likewise comprise position parameters and control parameters: the position parameters adjust the video's on-screen coordinate position, zoom size, and rotation angle, while the control parameters (open video, close video, previous video, next video) are driven by the key values passed from the keyboard communication module. Because video has an uncertain duration, feedback control of the playing duration is added. Finally, pictures and videos are made to coexist in the same window through a Group node.
Through the above technical scheme, the invention achieves the following beneficial effects: desktop touch detection is realized, corresponding pictures can be projected onto the desktop according to the touch area, and projection interaction is achieved. The system supports detection and projection not only on planes but also on curved surfaces. The algorithm flow is optimized, the processing speed is improved, and multi-point touch interaction is realized. Once scene arrangement, animation content, and touch recognition software configuration are balanced, the system achieves a good projection interaction effect, with touch accuracy above 90 percent and system delay within 0.5 second.
System tests show that the touch recognition accuracy and speed exceed the expected values and that a good projection interaction effect is achieved.
Drawings
FIG. 1 is a diagram of a system hardware architecture.
FIG. 2 is a diagram of a system software architecture.
Detailed Description
The implementation follows steps (1) to (5) described above. In testing, the running time of the touch recognition vision algorithm is far below the 0.5-second requirement, enabling fast recognition; the accuracy of the touch recognition system was tested 14 times with no misrecognition. The projection interaction system was set up on a desktop with three objects placed on it: a multimeter, a function generator, and a soldering station. When a hand touches an object, the related introduction pops up, and touching the projected picture switches to the next one.

Claims (3)

1. The invention discloses a projection interactive system design based on an image sensor, which comprises the following steps:
(1) initializing the system, completing the one-time processing task in the image processing program when the image processing program just starts to run, including camera setting, window setting and touch area dividing,
1. camera arrangement
Connecting an infrared camera, reading a camera image by adopting an operator open_framegrabber, selecting DirectShow by an interface, configuring camera parameters according to defaults, preparing for defining a touch area below in order to ensure that the brightness of the image is enough, and improving the image energy by adopting a mode of circularly collecting for multiple times;
2. window arrangement
The HALCON window is used as an important way of man-machine interaction, results in the process of image processing in each stage can be displayed on the window, the size of the current image is obtained through an operator get_image_size, the size of the opened image window is automatically adapted to the size of the image, the problem that the image display range exceeds the display control range during the design of an upper computer is avoided, and in addition, the setting of the drawing modes of the image outline and the indication mark in the window is also finished;
3. demarcating a touch area
Firstly, opening an empty tuple for storing all regional variables, then sequentially storing manually-defined interested regions into tuples by using a circulating structure, pressing a left mouse button in an image variable window to draw a rectangle to obtain four-corner coordinates of the rectangle, pressing a right mouse button to finish drawing, generating a rectangular region according to the four-corner coordinates of the drawn rectangle, combining all the regions together after all the definitions are finished, and then extracting corresponding partial images by using the combined region;
(2) the touch area motion detection link comprises optical flow calculation and optical flow field analysis, and mainly completes the detection of whether effective hand motion exists in the touch area, if so, the touch area enters the effective touch analysis link, if not, the detection is continuously re-performed,
1. optical flow computation
Optical flow tracking in HALCON is realized by adopting an operator optical_flow_mg, two continuous monochromatic images on a time sequence are required to be input, an image obtained from an infrared camera is an RGB (red, green and blue) color image, gray scale conversion is required to be carried out for one time to meet the input requirement of the monochromatic image, one monochromatic image is collected currently, the other monochromatic image is collected last time, and the optical flow calculation consumes longer time, so that the input image is subjected to image scaling treatment during calculation, the scaling ratio is 0.4, the algorithm processing speed can be improved on the premise of meeting the processing requirement, and the operator parameter output is VectorField which is the calculated optical flow field;
2. accelerating optical flow field analysis and removing interference
The touch analysis of all touch areas is directly and sequentially completed in a circulating mode, the touch analysis of all touch areas can be calculated once, the time consumption is long, a method for detecting and selecting characteristics is adopted, the touch areas which do not meet the conditions are removed, the subsequent touch analysis is not performed, the calculation speed is accelerated, some interference is removed, the characteristics are selected from two aspects, namely the vector length of an optical flow field and the area of a motion image, the vector length of the optical flow field represents the motion change condition, the interference of micro motion can be removed by setting a minimum threshold value, the area of the motion image is screened by a maximum value, after one-time touch optical flow calculation, a plurality of different motion areas can exist at the same time, the noise interference can be removed according to the selection of the area of the maximum motion area,
(3) analysis of touch results
1. Obtaining direction of motion
The method comprises the following steps that a hand image area is subjected to shape conversion once, convexity parameters are selected to enable the outline of the area to be complete and smooth, an optical flow field is intercepted according to the area subjected to shape conversion, the motion direction is calculated through the optical flow field, conversion from the optical flow field to real numbers is achieved through the HALCON operator vector_field_to_real, a vector field image is converted into two real value images, output images respectively comprise vector components in the row and column directions, the average value and deviation of gray values are calculated through the operator intensity, the respective gray value average values of the row and column real value images are obtained, the final motion direction component vectors in the X-axis direction and the Y-axis direction are represented through the two values, and the motion direction can be represented after synthesis;
2. calculating fingertip coordinates
Firstly, calculating the central coordinate of a selected motion target area, obtaining the central coordinate by calculating an operator area_center, taking the center as a starting point, combining component vectors of the motion direction in the X-axis direction and the Y-axis direction to make a ray, and specially processing the component vectors to ensure that the ray always points to the inside of an image, wherein the intersection point of the ray and the outline of the selected motion area is the calculated fingertip coordinate position;
3. calculating included angle value between motion vector and positive direction of X axis of image coordinate system
The image coordinate system takes the upper left corner of an image as an original coordinate point, the vertical downward direction is the positive direction of an X axis, the horizontal rightward direction is the positive direction of a Y axis, component vectors in the X axis and the Y axis directions in the motion direction are translated to the original coordinate point of the image coordinate system, an included angle between the motion vector and the positive direction of the X axis of the image coordinate system can be obtained by utilizing an inverse cosine function, and the value range of the included angle is 0-6.28;
4. calculating the included angle value between the connecting line of the center of the touch area and the origin of the image coordinate system and the positive direction of the X axis
Establishing a coordinate system by taking the center of the image as an original point, taking the vertical downward direction as the positive direction of an X axis and the horizontal right direction as the positive direction of a Y axis, acquiring the central coordinate of the touch area by an operator area_center, and calculating the included angle between the connecting line of the central coordinate and the original point of the coordinate and the positive direction of the X axis by utilizing an arcsine function;
5. calculating a corrected angle value
Synthesizing and correcting the angles obtained in the previous two steps to obtain a new angle value which represents the type of touch action, wherein the press action has a forward component, and the bounce action has a backward component, so that the press action has a value of 1.57-4.71, and the bounce action has a value of 0-1.57 or 4.71-6.28;
6. displaying the processing result
Displaying the original image, the outline of the hand area, the direction of the motion vector, the outline of the touch area and other graphic information on an image variable window through an operator dev_display, printing the fingertip coordinate data, the angle value of the motion vector under the image coordinate system, the angle value of the central connecting line of the touch area, the corrected angle value, the number of the touch area which is currently effectively touched and other character information on the image variable window through an operator disp_message to finish the functions of debugging and result display,
(4) touch recognition software design
1. Camera control module
The camera control module consists of two button Click events and a timer Tick event, wherein the Click event of opening the button of the camera completes the operations of clearing all variable structures, connecting the camera, setting a window and opening the enabling of an image acquisition timer, the Click event of closing the button of the camera completes the operations of closing the enabling of the image acquisition timer, closing the camera, closing the window and releasing resources, the period of the timer Tick event is set to be 50 milliseconds, and the tasks of image acquisition and display are executed in the event;
2. demarcating area module
The method comprises the following steps that a region defining module is composed of a button Click event, the number of touch areas needs to be set before clicking the defined region, then clicking the region defining button, pressing a left button of a mouse to not release the region which can be drawn, releasing a right button by the left button to finish one touch area drawing until all the touch areas are drawn, waiting for right button clicking operation by a DrawRectangle1 function, then performing next circulation, and after all the regions are drawn, closing an image acquisition timer by a program, opening an image processing timer and entering a touch identification link;
3. image processing module
The image processing module is realized through a timer event, the program is realized through C # dynamic link library recombination optimization derived by HALCON, the final processing result is displayed in a TextBox control, in order to improve the stability and accuracy of touch identification, on the basis of the original HALCON visual algorithm, continuous track analysis processing is added, through optimizing a coordinate system, the value range of a pressing action is 0-3.14, the value range of a bouncing action is-3.14-0, the continuous track analysis adopts median filtering, jitter elimination processing is added, stability detection is realized, and the whole touch process is realized: temporary steady state, rising edge, shaking, steady state, shaking, falling edge and temporary steady state identification processing;
4. communication module
The communication module and the image processing module share a timer event to realize the communication function with the VVVV programming animation software, wherein a key event communication mode is adopted, different touch area pressing actions can trigger a unique key value, touch information is transmitted through the key value, a corresponding key value receiving program is written at the VVVV software end, different key values can trigger corresponding animation effects,
(5) programming animation software design
1. Keyboard communication module
The keyboard communication module can receive key values transmitted by the touch recognition software and output the key values in an ASCII code form corresponding to the key values, a keyboard communication mode can support at most 26 key values and corresponds to 26 interaction points, the communication mode is unidirectional, the touch recognition software serves as a transmitting end, and the VVVV software serves as a receiving end;
2. picture processing module
The input parameters of each picture processing module comprise position parameters and control parameters, the position parameters are responsible for adjusting the coordinate position, the zooming size and the rotating angle of the picture on the screen, the control parameters comprise opening the picture, closing the picture, a previous picture and a next picture, and the control parameters are controlled by key values transmitted by the keyboard communication module;
3. video processing module
The input parameters of each video processing module comprise position parameters and control parameters, the position parameters are responsible for adjusting the coordinate position, the zooming size and the rotating angle of a video on a screen, the control parameters comprise video opening, video closing, a previous video and a next video, the control parameters are controlled by key value transmission of a keyboard communication module, due to the fact that the video processing has the problem that the duration is uncertain, feedback control of the playing duration needs to be added, and finally the picture and the video coexist in the same window through a Group node.
2. The image sensor-based projection interaction system design of claim 1, wherein: in step (2), the input image is subjected to image scaling processing at a scaling of 0.4.
3. The image sensor-based projection interaction system design of claim 1, wherein: in the step (3), the value of the pressing action is between 1.57 and 4.71, and the value of the bouncing action is between 0 and 1.57 or 4.71 and 6.28.
CN202010604944.5A 2020-06-29 2020-06-29 Projection interactive system design based on image sensor Pending CN111752456A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010604944.5A CN111752456A (en) 2020-06-29 2020-06-29 Projection interactive system design based on image sensor


Publications (1)

Publication Number Publication Date
CN111752456A true CN111752456A (en) 2020-10-09

Family

ID=72677875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010604944.5A Pending CN111752456A (en) 2020-06-29 2020-06-29 Projection interactive system design based on image sensor

Country Status (1)

Country Link
CN (1) CN111752456A (en)


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112987906A (en) * 2021-03-26 2021-06-18 北京小米移动软件有限公司 Method and device for reducing display power consumption
CN112987906B (en) * 2021-03-26 2024-05-07 北京小米移动软件有限公司 Method and device for reducing display power consumption
CN115209113A (en) * 2021-04-14 2022-10-18 广州拓火科技有限公司 Projection system
CN115209113B (en) * 2021-04-14 2024-04-05 广州拓火科技有限公司 Projection system
CN113442565A (en) * 2021-07-07 2021-09-28 南通大学 Screen printing plate-making projection system
CN114356140A (en) * 2021-12-31 2022-04-15 上海永亚智能科技有限公司 Key action identification method of infrared induction suspension key
CN114356140B (en) * 2021-12-31 2024-02-13 上海永亚智能科技有限公司 Key action recognition method of infrared induction suspension key
CN116965751A (en) * 2022-11-28 2023-10-31 开立生物医疗科技(武汉)有限公司 Endoscope moving speed detection method, device, electronic equipment and storage medium
CN115859414A (en) * 2023-02-27 2023-03-28 中国铁路设计集团有限公司 Cross-coordinate system use method for global scale geographic information base map
CN115859414B (en) * 2023-02-27 2023-06-16 中国铁路设计集团有限公司 Global scale geographic information base map cross-coordinate system using method
CN117771664A (en) * 2024-01-03 2024-03-29 广州创一网络传媒有限公司 Interactive game projection method of self-adaptive projection surface
CN117771664B (en) * 2024-01-03 2024-06-07 广州创一网络传媒有限公司 Interactive game projection method of self-adaptive projection surface


Legal Events

Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20201009)