CN106408577B - Continuous frame connected domain parallel marking method for projection interactive system - Google Patents

Continuous frame connected domain parallel marking method for projection interactive system

Info

Publication number
CN106408577B
Authority
CN
China
Prior art keywords
sub-lines
pixel sub-lines
marking
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610840257.7A
Other languages
Chinese (zh)
Other versions
CN106408577A (en
Inventor
邓宏平
汪俊锋
栾庆磊
吴伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Wisdom Gold Tong Technology Co Ltd
Original Assignee
Anhui Wisdom Gold Tong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Wisdom Gold Tong Technology Co Ltd filed Critical Anhui Wisdom Gold Tong Technology Co Ltd
Priority to CN201610840257.7A priority Critical patent/CN106408577B/en
Publication of CN106408577A publication Critical patent/CN106408577A/en
Application granted granted Critical
Publication of CN106408577B publication Critical patent/CN106408577B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The invention relates to a continuous-frame connected-domain parallel marking method for a projection interactive system, which comprises the following steps: acquiring an image video, performing block-based marking on its first frame image, and marking the connected regions of that frame to obtain a connected-region label map expressed in sub-lines (horizontal runs of foreground pixels); obtaining new pixels and lost pixels by applying a frame-difference method to the current frame image and the previous frame image to obtain their frame-difference image, performing block-based marking on the frame-difference image, and obtaining the lost-pixel sub-lines and new-pixel sub-lines of the current frame image through sub-line fusion between different blocks; and correcting the marking result of the previous frame image by processing the lost-pixel sub-lines and new-pixel sub-lines, then merging the correction into the marking result of the current frame image. When processing continuous frame images, the invention completes the marking of connected regions in binary images rapidly, improving the speed of both continuous-frame marking and connected-region labeling.

Description

Continuous frame connected domain parallel marking method for projection interactive system
Technical Field
The invention relates to the technical field of computer vision processing, and in particular to a parallel marking method for continuous-frame connected domains in a projection interaction system.
Background
Human-computer interaction is a crucial field of computer science, and its development history largely mirrors that of computers themselves: from the clumsy switch panels of the earliest mainframes, to the mature keyboard and mouse, to today's ubiquitous touch screens. In recent years, the rapid development of computer vision and the appearance of new sensors have given rise to a variety of portable human-computer interaction modes.
However, because depth cameras such as the Kinect are clearly limited in range accuracy and spatial resolution, they cannot support fine-grained interaction with a computer: a user cannot conveniently and precisely operate within a display area projected onto a wall by a projector, especially for double-click operations that demand high spatial and temporal accuracy.
A system combining a light pen with a camera can operate on the projected image and thereby operate the computer. To obtain the current position of the finger or light pen, the image captured by the camera must be analyzed: foreground pixels are extracted and connected-region analysis is performed, completing the detection and positioning of the finger or light pen.
Traditional connected-region marking methods work on a single frame at a time. In real-time detection with a camera, the difference between two adjacent frames is very small; if every frame is completely re-marked, a large amount of computation is repeated, efficiency suffers, and the smoothness of the projection system is seriously affected.
Disclosure of Invention
The invention aims to provide a continuous-frame connected-domain parallel marking method for a projection interactive system, so as to realize fast marking of connected domains across continuous frames.
In order to achieve the purpose, the invention adopts the following technical scheme:
a continuous frame connected domain parallel marking method for a projection interactive system comprises the following steps:
(1) acquiring an image video, performing block-based marking on its first frame image, and marking the connected regions of the first frame image to obtain a label map expressed in sub-lines;
(2) obtaining new pixels and lost pixels: applying a frame-difference method to the current frame image and the previous frame image to obtain their frame-difference image, performing block-based marking on the frame-difference image, and obtaining the lost-pixel sub-lines and new-pixel sub-lines of the current frame image through sub-line fusion between different blocks;
(3) correcting the marking result of the previous frame image by processing the lost-pixel sub-lines and new-pixel sub-lines, and merging the correction into the marking result of the current frame image.
In the continuous-frame connected-domain parallel marking method for a projection interactive system, step (1) — performing block-based marking on the first frame image of the acquired image video and marking its connected regions to obtain the sub-line label map — specifically comprises the following steps:
(11) detecting the sub-lines of each sub-block with one thread per sub-block;
(12) arranging the sub-line sequence numbers using the maximum possible number of sub-lines per sub-block;
(13) restoring the start and end points of each sub-line according to the position of its block in the image;
(14) analyzing all sub-lines in sequence-number order and fusing abutting sub-lines;
(15) constructing a relation graph from the connection relations of the fused sub-lines to obtain the connected-domain label map.
In the continuous-frame connected-domain parallel marking method for a projection interactive system, step (15) — constructing the relation graph from the connection relations of the fused sub-lines to obtain the connected-domain label map — specifically comprises the following steps:
(151) allocating one thread for each sub-line;
(152) for each sub-line, searching all sub-lines in the next row, comparing their start and end positions to analyze whether they are connected, and, if connected, establishing a link between the corresponding nodes, completing the construction of the relation graph;
(153) scanning the relation graph to obtain the connected-region label map.
In the continuous-frame connected-domain parallel marking method for a projection interactive system, step (3) — correcting the marking result of the previous frame image by processing the lost-pixel sub-lines and new-pixel sub-lines and merging the correction into the marking result of the current frame image — specifically comprises the following steps:
(31) allocating a thread for each lost-pixel sub-line;
(32) analyzing, in the row where each lost-pixel sub-line lies, the breakage, shortening, and disappearance of original sub-lines caused by the lost pixels; identifying the changed sub-lines by comparing the start and end positions of the lost-pixel sub-lines with those of the original sub-lines; and directly deleting from the relation graph the original sub-lines that disappear entirely, without yet modifying the connection information of the sub-lines related to them;
(33) allocating a thread for each new-pixel sub-line;
(34) analyzing, in the row where each new-pixel sub-line lies, the merging, lengthening, and new appearance of sub-lines caused by the new pixels; identifying the changed sub-lines by comparing the start and end positions of the new-pixel sub-lines with those of the original sub-lines; and adding the new sub-lines to the relation graph, without yet modifying the connection information of the sub-lines related to them;
(35) recording the sequence numbers of all changed lost-pixel and new-pixel sub-lines and of the sub-lines related to them;
(36) allocating a thread for each changed sub-line in the relation graph, removing its original connection information, searching the next row for possibly-connected sub-lines, and modifying the connection information, thereby completing the correction of the previous frame's marking information.
According to the technical scheme above, the continuous-frame connected-domain parallel marking method for a projection interactive system can rapidly complete the marking of connected regions in binary images when processing continuous frame images, improving the speed of both continuous-frame marking and connected-region labeling.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic illustration of sub-line fusion in the present invention;
FIG. 3 is a first-frame foreground image of the present invention;
FIG. 4 is a current-frame image of the present invention;
FIG. 5 is a frame-difference diagram of FIG. 3 and FIG. 4;
FIG. 6 is a scan-analysis diagram of FIG. 5;
FIG. 7 is a schematic representation of the connected-domain labeling of FIG. 3.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
a continuous frame connected domain parallel marking method for a projection interactive system comprises the following steps:
S1: acquiring an image video, performing block-based marking on the first frame image, and marking its connected regions to obtain a sub-line label map:
the connected component labeling method in this patent is expressed by using a graph in which a subline is used as a basic unit. Therefore, when the image is processed in a blocking mode, one line can be used as a subblock to be divided, and each thread processes one line of image, so that the data consistency can be guaranteed, a production line is facilitated, and the method is suitable for the characteristic that an algorithm is expressed in a line unit. If the number of the GPU threads is enough, one line can be divided into multiple sections, and the parallelism is further improved.
As shown in FIG. 2, a row of 20 pixels is divided into 4 sub-blocks for parallel processing; the bold vertical black lines indicate the dividing lines between adjacent blocks, and white pixels represent foreground pixels. The row contains 3 sub-lines in total, but because of the blocking the first sub-line is split into 2 segments and the second into 3 segments, so the per-block results must undergo sub-line fusion (a task handled by the first thread).
Four threads perform sub-line detection on the four sub-blocks; the specific process is as follows:
S11: each thread detects its sub-block independently, giving the following sub-line results:
sub-block 0: [0:0:3,4]
Sub-block 1: [0:0:0,0],[1:0:4,4]
Sub-block 2: [0:0:0,4]
Sub-block 3: [0:0:0,0],[1:0:4,4]
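The per-block detection of S11 amounts to a run-length scan of each sub-block. A minimal Python sketch, reproducing the sub-block results above (function and variable names are illustrative, not from the patent; one loop iteration stands in for one GPU thread):

```python
def detect_subrows(block, row):
    """Scan one binary sub-block and return its local sub-lines
    as (seq, row, start, end) tuples, mirroring [seq:row:start,end]."""
    subrows, seq, start = [], 0, None
    for i, px in enumerate(block):
        if px and start is None:
            start = i                       # a run of foreground pixels begins
        elif not px and start is not None:
            subrows.append((seq, row, start, i - 1))
            seq += 1
            start = None                    # the run ended at the previous pixel
    if start is not None:                   # run extends to the block's last pixel
        subrows.append((seq, row, start, len(block) - 1))
    return subrows

# Fig. 2 row: 20 pixels, foreground (white) at columns 3-5, 9-15, 19
pixels = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]
blocks = [pixels[i * 5:(i + 1) * 5] for i in range(4)]
per_block = [detect_subrows(b, 0) for b in blocks]
```

The four entries of `per_block` correspond one-to-one to the sub-block results listed above for S11.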
S12: sequence-number arrangement:
To prevent sub-line sequence numbers from colliding between different sub-blocks, the maximum possible number of sub-lines per sub-block is computed first (for a sub-block of size 1×5 there are at most 3 sub-lines), and then 3×N is added to the sequence numbers of the sub-lines in the Nth sub-block (counting from 0):
Sub-block 0: [0:0:3,4]
Sub-block 1: [3:0:0,0],[4:0:4,4]
Sub-block 2: [6:0:0,4]
Sub-block 3: [9:0:0,0],[10:0:4,4]
S13: sub-line position restoration:
Simultaneously, the start and end points of each sub-line are restored according to the position of its block in the image, i.e. 5×N (5 being the width of each sub-block) is added to both the start and the end of every sub-line. The sub-line scan results of FIG. 2 then become:
sub-block 0: [0:0:3,4]
Sub-block 1: [3:0:5,5],[4:0:9,9]
Sub-block 2: [6:0:10,14]
Sub-block 3: [9:0:15,15],[10:0:19,19]
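Steps S12 and S13 can be combined into a single renumbering pass over the per-block results. A sketch under the same assumptions as the example (3 = maximum sub-lines per 1×5 sub-block, 5 = sub-block width; names are illustrative):

```python
def globalize(per_block, max_runs=3, block_w=5):
    """Offset each block's sub-line sequence numbers by max_runs*N (S12)
    and its start/end columns by block_w*N (S13), N = block index."""
    out = []
    for n, subrows in enumerate(per_block):
        for seq, row, s, e in subrows:
            out.append((seq + max_runs * n, row, s + block_w * n, e + block_w * n))
    return out

# per-block detection results of S11 for the Fig. 2 row
per_block = [[(0, 0, 3, 4)],
             [(0, 0, 0, 0), (1, 0, 4, 4)],
             [(0, 0, 0, 4)],
             [(0, 0, 0, 0), (1, 0, 4, 4)]]
global_runs = globalize(per_block)
```

The result reproduces the S13 listing above: sequence numbers 0, 3, 4, 6, 9, 10 with restored column positions.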
S14: sub-line fusion:
All sub-lines are analyzed in sequence-number order to check whether adjacent sub-lines abut; if they do, they are merged into one by modifying the end position of the earlier sub-line. The information of the sub-lines after fusion is as follows:
[0:0:3,5],[4:0:9,15],[10:0:19,19]
To prevent sub-line sequence numbers from overlapping between different rows, the merged sequence numbers must also be adjusted: if the maximum number of sub-lines per row is M and the current row is row y (counting from 0), then y×M is added to the sequence number of each sub-line in that row.
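The fusion rule of S14 — merge a sub-line into its predecessor when the two runs abut, by modifying the earlier sub-line's end position — can be sketched as follows (illustrative names; the input is the renumbered S13 result):

```python
def fuse(subrows):
    """Merge sub-lines whose column ranges abut, keeping the earlier
    sub-line's sequence number and extending its end position (S14)."""
    subrows = sorted(subrows, key=lambda r: r[2])   # order by start column
    fused = [list(subrows[0])]
    for r in subrows[1:]:
        if r[2] == fused[-1][3] + 1:    # next run starts right after this one ends
            fused[-1][3] = r[3]         # extend the end of the previous sub-line
        else:
            fused.append(list(r))
    return [tuple(f) for f in fused]

# renumbered sub-lines of the Fig. 2 row after S13
renumbered = [(0, 0, 3, 4), (3, 0, 5, 5), (4, 0, 9, 9),
              (6, 0, 10, 14), (9, 0, 15, 15), (10, 0, 19, 19)]
fused = fuse(renumbered)
```

The output matches the fused listing [0:0:3,5],[4:0:9,15],[10:0:19,19] given above.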
S15: relation-graph construction:
After all rows of the image have been processed in parallel, the connection relations of sub-lines between adjacent rows must be analyzed to complete the relation graph and finally obtain the connected-domain marking result. The detailed process is illustrated with FIG. 3:
One thread is first allocated for each sub-line. FIG. 3 contains 6 sub-lines that participate in the search (the last row does not search downward), so 6 threads are allocated. Each sub-line searches all sub-lines in the row below its position and compares their start and end positions to analyze whether they are connected; if so, a link is established between the corresponding nodes (the last row has no next row and need not search). Each sub-line searches only the next row, and the combined results of all threads build the complete relation graph. As shown in FIG. 3, sub-line 0 finds sub-line 2, sub-line 1 finds sub-line 3, and sub-lines 2 and 3 both find sub-line 5, so the first connected-domain subgraph is constructed; similarly, sub-line 4 finds sub-lines 7 and 8, and sub-line 6 finds sub-line 10. In one search cycle each sub-line performs only a few operations, but the threads work separately and together quickly complete the relation graph over the full image. Scanning the relation graph yields the final connected-domain marking result, as shown in FIG. 7.
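The relation-graph construction of S15 links sub-lines of adjacent rows whose column spans touch and then scans the graph for components. In this sketch a union-find stands in for the final graph scan, and 4-connectivity is an assumption (the patent does not state which connectivity it uses); the example rows are hypothetical, not taken from FIG. 3:

```python
def connected(a, b):
    # 4-connectivity assumption: two sub-lines (seq, start, end) in
    # adjacent rows touch if their column spans overlap
    return a[1] <= b[2] and b[1] <= a[2]

def label_runs(rows):
    """rows: one list of (seq, start, end) sub-lines per image row.
    Each sub-line 'searches' only the row below it, as in S15; a
    union-find stands in for scanning the relation graph."""
    parent = {r[0]: r[0] for row in rows for r in row}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for upper, lower in zip(rows, rows[1:]):
        for a in upper:
            for b in lower:
                if connected(a, b):
                    parent[find(a[0])] = find(b[0])   # link the two nodes
    return {r[0]: find(r[0]) for row in rows for r in row}

# hypothetical 2-row image with two separate connected domains
rows = [[(0, 0, 1), (1, 4, 5)],
        [(2, 0, 1), (3, 4, 5)]]
labels = label_runs(rows)
n_components = len(set(labels.values()))
```

In a GPU implementation each sub-line's search of the next row would be one thread; the serial double loop here only models the combined result.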
S2: obtaining new pixels and lost pixels: applying a frame-difference method to the current frame image and the previous frame image to obtain their frame-difference image, performing block-based marking on the frame-difference image, and obtaining the lost-pixel sub-lines and new-pixel sub-lines of the current frame image through sub-line fusion between different blocks:
all the blocking strategies when calculating the sub-line formed by the missing pixels in the frame difference image and the sub-line formed by the new pixels are consistent with the blocking strategy when the connected domain blocks in the first frame are marked. One line is divided as a sub-block and each thread processes one line of the image. If the number of the threads of the GPU is enough, one line is also divided into multiple sections, so that the parallelism is improved. The final result can be obtained by completing the calculation of the frame difference map in each block and then fusing the frame difference maps of all the blocks of the whole map.
Detection and fusion of sub-lines: the strategy for detecting lost-pixel and new-pixel sub-lines and for fusing sub-lines between different blocks is likewise the same as for the first-frame connected-domain block marking: sub-lines located in different blocks but abutting each other are merged according to their start and end positions. As shown in FIG. 6, the frame-difference map of the current frame yields the marked lost-pixel and new-pixel sub-lines. Through this process, the lost-pixel and new-pixel sub-lines of the current frame image are obtained over the full image range.
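The frame-difference classification of S2 splits the differing pixels into lost pixels (foreground in the previous frame only) and new pixels (foreground in the current frame only), then run-length encodes each set into sub-lines. A sketch for a single row (illustrative names; rows are 0/1 lists):

```python
def runs(bits):
    """Run-length encode a 0/1 sequence into (start, end) pairs."""
    out, start = [], None
    for i, b in enumerate(bits):
        if b and start is None:
            start = i
        elif not b and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(bits) - 1))
    return out

def frame_diff(prev_row, cur_row):
    """Split one row's frame difference into lost-pixel sub-lines
    (previous frame only) and new-pixel sub-lines (current frame only)."""
    lost = runs([p and not c for p, c in zip(prev_row, cur_row)])
    new = runs([c and not p for p, c in zip(prev_row, cur_row)])
    return lost, new

prev = [0, 1, 1, 1, 0, 0, 0, 1]
cur  = [0, 0, 1, 1, 1, 0, 0, 0]
lost, new = frame_diff(prev, cur)
```

Here pixels 1 and 7 are lost and pixel 4 is new; per the blocking strategy above, each such row (or row segment) would be handled by its own thread.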
S3: correcting the marking result of the previous frame image by processing the lost-pixel sub-lines and new-pixel sub-lines, and merging the correction into the marking result of the current frame image. The method comprises the following steps:
S31: allocating a thread for each lost-pixel sub-line;
S32: analyzing, in the row where each lost-pixel sub-line lies, the breakage, shortening, and disappearance of original sub-lines caused by the lost pixels; identifying the changed sub-lines by comparing the start and end positions of the lost-pixel sub-lines with those of the original sub-lines; and directly deleting from the relation graph the original sub-lines that disappear entirely, without yet modifying the connection information of the sub-lines related to them;
S33: allocating a thread for each new-pixel sub-line;
S34: analyzing, in the row where each new-pixel sub-line lies, the merging, lengthening, and new appearance of sub-lines caused by the new pixels; identifying the changed sub-lines by comparing the start and end positions of the new-pixel sub-lines with those of the original sub-lines; and adding the new sub-lines to the relation graph, without yet modifying the connection information of the sub-lines related to them;
S35: recording the sequence numbers of all changed lost-pixel and new-pixel sub-lines and of the sub-lines related to them;
S36: allocating a thread for each changed sub-line in the relation graph, removing its original connection information, searching the next row for possibly-connected sub-lines, and modifying the connection information, thereby completing the correction of the previous frame's marking information.
The algorithm is applied to FIG. 6 as follows:
Threads are allocated for the 4 lost sub-lines in the figure: sub-lines 11, 12, 13, and 14. The original sub-lines related to sub-line 11 are searched in row 0; sub-line 0 is found to overlap sub-line 11, so the position of sub-line 0 is modified. Similarly, original sub-lines 5 and 7 must have their positions modified, and sub-line 9 disappears. Threads are then allocated for the 4 newly appearing sub-lines: sub-lines 15, 16, 17, and 18. Each row is searched for original sub-lines that overlap the new sub-lines in position: sub-line 1 must be modified in position, while sub-lines 16, 17, and 18 are entirely new. For the changed sub-lines, i.e. sub-lines 0, 5, 7, 9, 1, 16, 17, and 18, their related sub-lines, i.e. sub-lines 3, 4, and 10, are found in the original relation graph, and threads are assigned to all of these sub-lines. For each changed sub-line and its related sub-lines, the retained connection information is removed first, then connected sub-lines are found in the next row and links are established, modifying the relation graph; a sub-line located in the last row requires no processing. When the algorithm completes, 4 connected domains are finally obtained.
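For a single row, the net effect of S31–S36 on that row's sub-lines — breakage, shortening, and disappearance from lost pixels; merging, lengthening, and appearance from new pixels — can be checked by replaying the pixel edits and re-scanning the row, as in this sketch (illustrative names and data; the real method edits the relation graph incrementally rather than rebuilding the row):

```python
def runs(bits):
    """Run-length encode a 0/1 sequence into (start, end) pairs."""
    out, start = [], None
    for i, b in enumerate(bits):
        if b and start is None:
            start = i
        elif not b and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(bits) - 1))
    return out

def correct_row(prev_runs, lost, new, width):
    """Replay the lost/new pixel runs on one row of the previous frame
    and re-scan it, reproducing the row-level outcome of S31-S36."""
    px = [0] * width
    for s, e in prev_runs:              # restore the previous frame's row
        for i in range(s, e + 1):
            px[i] = 1
    for s, e in lost:                   # lost pixels break/shorten/erase sub-lines
        for i in range(s, e + 1):
            px[i] = 0
    for s, e in new:                    # new pixels merge/lengthen/create sub-lines
        for i in range(s, e + 1):
            px[i] = 1
    return runs(px)

# previous-frame row: one sub-line over columns 2-6; pixel 4 is lost
# (breaking it in two) and pixels 8-9 are new (a fresh sub-line)
corrected = correct_row([(2, 6)], [(4, 4)], [(8, 9)], 12)
```

Only the rows touched by lost or new pixels need this treatment, which is what makes the incremental scheme cheaper than re-marking every frame.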
The above-mentioned embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope; various modifications and improvements made to the technical solution by those skilled in the art, without departing from the spirit of the invention, shall fall within the protection scope defined by the claims.

Claims (3)

1. A continuous frame connected domain parallel marking method for a projection interactive system, characterized in that the method comprises the following steps:
(1) acquiring an image video, performing block-based marking on its first frame image, and marking the connected regions of the first frame image to obtain a connected-region label map expressed in sub-lines;
(2) obtaining new pixels and lost pixels: applying a frame-difference method to the current frame image and the previous frame image to obtain their frame-difference image, performing block-based marking on the frame-difference image, and obtaining the lost-pixel sub-lines and new-pixel sub-lines of the current frame image through sub-line fusion between different blocks;
(3) correcting the marking result of the previous frame image by processing the lost-pixel sub-lines and new-pixel sub-lines, and merging the correction into the marking result of the current frame image, specifically comprising the following steps:
(31) allocating a thread for each lost-pixel sub-line;
(32) analyzing, in the row where each lost-pixel sub-line lies, the breakage, shortening, and disappearance of original sub-lines caused by the lost pixels; identifying the changed sub-lines by comparing the start and end positions of the lost-pixel sub-lines with those of the original sub-lines; and directly deleting from the relation graph the original sub-lines that disappear;
(33) allocating a thread for each new-pixel sub-line;
(34) analyzing, in the row where each new-pixel sub-line lies, the merging, lengthening, and new appearance of sub-lines caused by the new pixels; identifying the changed sub-lines by comparing the start and end positions of the new-pixel sub-lines with those of the original sub-lines; and adding the new sub-lines to the relation graph;
(35) recording the sequence numbers of all changed lost-pixel and new-pixel sub-lines and of the sub-lines related to them;
(36) allocating a thread for each changed sub-line in the relation graph, removing its original connection information, searching the next row for possibly-connected sub-lines, and modifying the connection information, thereby completing the correction of the previous frame's marking information.
2. The continuous frame connected domain parallel marking method for a projection interactive system as claimed in claim 1, characterized in that: in step (1), performing block-based marking on the first frame image of the acquired image video and marking its connected regions to obtain the sub-line connected-region label map specifically comprises the following steps:
(11) detecting the sub-lines of each sub-block with one thread per sub-block;
(12) arranging the sub-line sequence numbers using the maximum possible number of sub-lines per sub-block;
(13) restoring the start and end points of each sub-line according to the position of its block in the image;
(14) analyzing all sub-lines in sequence-number order and fusing abutting sub-lines;
(15) constructing a relation graph from the connection relations of the fused sub-lines to obtain the connected-domain label map.
3. The continuous frame connected domain parallel marking method for a projection interactive system as claimed in claim 2, characterized in that: in step (15), constructing the relation graph from the connection relations of the fused sub-lines to obtain the connected-domain label map specifically comprises the following steps:
(151) allocating one thread for each sub-line;
(152) for each sub-line, searching all sub-lines in the next row, comparing their start and end positions to analyze whether they are connected, and, if connected, establishing a link between the corresponding nodes, completing the construction of the relation graph;
(153) scanning the relation graph to obtain the connected-region label map.
CN201610840257.7A 2016-09-21 2016-09-21 Continuous frame connected domain parallel marking method for projection interactive system Active CN106408577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610840257.7A CN106408577B (en) 2016-09-21 2016-09-21 Continuous frame connected domain parallel marking method for projection interactive system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610840257.7A CN106408577B (en) 2016-09-21 2016-09-21 Continuous frame connected domain parallel marking method for projection interactive system

Publications (2)

Publication Number Publication Date
CN106408577A CN106408577A (en) 2017-02-15
CN106408577B true CN106408577B (en) 2019-12-31

Family

ID=57997968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610840257.7A Active CN106408577B (en) 2016-09-21 2016-09-21 Continuous frame connected domain parallel marking method for projection interactive system

Country Status (1)

Country Link
CN (1) CN106408577B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1801930A (en) * 2005-12-06 2006-07-12 南望信息产业集团有限公司 Dubious static object detecting method based on video content analysis
JP2007164804A (en) * 2007-01-22 2007-06-28 Asia Air Survey Co Ltd Mobile object detecting system, mobile object detecting device, mobile object detection method and mobile object detecting program
CN101727654A (en) * 2009-08-06 2010-06-09 北京理工大学 Method realized by parallel pipeline for performing real-time marking and identification on connected domains of point targets
CN102194232A (en) * 2011-05-23 2011-09-21 西安理工大学 Layering-guided video image target segmenting method
CN103295238A (en) * 2013-06-03 2013-09-11 南京信息工程大学 ROI (region of interest) motion detection based real-time video positioning method for Android platform
CN105931267A (en) * 2016-04-15 2016-09-07 华南理工大学 Moving object detection and tracking method based on improved ViBe algorithm



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 230000 Yafu Park, Juchao Economic Development Zone, Chaohu City, Hefei City, Anhui Province

Patentee after: ANHUI HUISHI JINTONG TECHNOLOGY Co.,Ltd.

Address before: 102, room 602, C District, Hefei National University, Mount Huangshan Road, 230000 Hefei Road, Anhui, China

Patentee before: ANHUI HUISHI JINTONG TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder