CN107240110A - Projection mapping region automatic identifying method based on machine vision technique - Google Patents
- Publication number
- CN107240110A (application CN201710415053.3A)
- Authority
- CN (China)
- Prior art keywords
- profile, projection mapping, machine vision
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/13 — Image analysis; Segmentation; Edge detection
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06T7/181 — Segmentation; Edge detection involving edge growing; involving edge linking
- G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
- G06T2207/10024 — Indexing scheme for image analysis or image enhancement; Image acquisition modality; Color image
Abstract
The present invention relates to a method for automatically identifying a projection mapping region based on machine vision, comprising the following steps: S1, the input color image is first converted to a grayscale image and then binarized; by setting a suitable threshold, each pixel of the image is replaced with one of the two values 0 or 1, preliminarily separating foreground from background. S2, using the binary image output by the previous step, the contours of the shapes are approximated by polygons, and the vertex coordinates of each object's contour are output. S3, the contours are classified to determine whether each one is an object or model that needs to be projected onto; if not, the vertex coordinates of that object are deleted. S4, the projected image is output into the region enclosed by the vertices. The advantages of the present invention are: the projection region can be detected quickly, the projection range can be reset, and the deviation between the projected image and the physical object can be corrected in real time. Projection mapping is no longer limited to stationary objects; the contours of moving objects can also be captured, enabling real-time projection rendering onto objects in motion.
Description
Technical field
The present invention relates to projection mapping technology, and in particular to a method for automatically identifying a projection mapping region based on machine vision.
Background art
As the amount of information grows, the ways of presenting it become increasingly diverse. Advances in information technology have made virtual reality and augmented reality practical. Before these technologies emerged, the real world around us could hardly present information beyond its own time and space. Through virtual technology and simulation, however, arbitrary information can be superimposed onto the real world, letting users perceive rich visual information and reach a sensory experience beyond reality. The goal of virtual reality is to connect the information of the physical world seamlessly with that of the virtual world, realizing a one-to-one mapping from atoms to bits. Many current augmented reality projects are realized with head-mounted displays and glasses, such as the Oculus Rift display and Google Glass. But the imaging quality of such devices still has room for improvement, and wearing them for an extended time easily causes visual fatigue. Another augmented reality approach uses a smartphone: the image captured by the camera is displayed with the required information rendered on top of it.
For augmented reality projects in public places, however, it is difficult to require every participant to use the same device. Projecting onto physical objects therefore becomes a third way to realize augmented reality. Projection mapping is also known as spatial augmented reality. Using projection technology, images or data can be projected onto irregularly shaped objects, and with suitable software an image adapted to the surfaces of the object can be output.
In existing projection mapping solutions, the region to be projected must be drawn by the user with a drawing tool, with the projector switched on during drawing so that the region can be positioned by eye. The drawing tool provides the user with basic geometric shapes whose size and angle can be changed by dragging; for irregularly shaped objects, the user must trace the outline with a polygon tool.
The disadvantage of the prior art is that a manually drawn projection region is likely to be misaligned with the physical contour. Once the projector or the model moves even slightly, the hand-drawn region is no longer accurate, and current solutions can neither feed the deviation of the projection region back to the user nor correct the projection range automatically. A more automatic and intelligent region recognition technique is therefore needed to replace the manual drawing of the projection region.
Summary of the invention
The purpose of the present invention is to address the above deficiencies of the prior art by providing a method for automatically identifying a projection mapping region based on machine vision, which adapts to changes automatically and determines the projection region from the image captured by a camera. The projection region can be detected quickly, the projection range reset, and the deviation between the projected image and the physical object corrected in real time. Moreover, projection mapping is no longer limited to stationary objects: the contours of moving objects can also be captured, enabling real-time projection rendering onto objects in motion.
To achieve the above object, the invention discloses the following technical scheme.

The method for automatically identifying a projection mapping region based on machine vision comprises the following steps:

S1: the input color image is first converted to a grayscale image and then binarized; by setting a suitable threshold, each pixel of the image is replaced with one of the two values 0 or 1, preliminarily separating foreground from background;

S2: using the binary image output by the previous step, the contours of the shapes are approximated by polygons, and the vertex coordinates of each object's contour are output;

S3: the contours are classified to determine whether each one is an object or model that needs to be projected onto; if not, the vertex coordinates of that object are deleted;

S4: the projected image is output into the region enclosed by the vertices.
Further, in step S1, when preliminarily separating foreground and background, let the binarized image be F_T[i, j] and let T be the chosen threshold, satisfying the following formula:

$$F_T[i,j]=\begin{cases}1 & \text{if } F[i,j]\le T\\ 0 & \text{otherwise}\end{cases}$$
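The thresholding rule of step S1 can be sketched as a minimal Python function; the nested-list image representation and the function name are assumptions for illustration, not part of the patent:

```python
def binarize(gray, threshold):
    """Replace each pixel with 1 (foreground) or 0 (background).

    Implements F_T[i, j] = 1 if F[i, j] <= threshold else 0, i.e. darker
    pixels are treated as foreground.
    """
    return [[1 if pixel <= threshold else 0 for pixel in row]
            for row in gray]

# A 2x3 grayscale patch with threshold T = 90:
binary = binarize([[10, 200, 90], [91, 50, 255]], 90)
# binary == [[1, 0, 1], [0, 1, 0]]
```

In practice the threshold T would be tuned to the lighting of the scene; an adaptive or Otsu-style threshold is a common refinement.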
Further, in step S2, when a curve is sampled, let the two endpoints of the curve be X1 and X2, with coordinates X1 = (x1, y1) and X2 = (x2, y2). A point X0 = (x0, y0) is then sought between X1 and X2, and its position is evaluated by computing the distance

$$d=\frac{\left|\det\!\left(\mathbf{x}_2-\mathbf{x}_1,\;\mathbf{x}_1-\mathbf{x}_0\right)\right|}{\left|\mathbf{x}_2-\mathbf{x}_1\right|}$$

where d is the perpendicular distance of the intermediate point X0 from the line through X1 and X2. A further point is taken between every two points and this step is repeated, yielding a polygonal contour that fits the outline of an arbitrary object.
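The determinant in the distance formula is the 2-D cross product, so the computation reduces to a few arithmetic operations. A hedged sketch (function and variable names are illustrative assumptions):

```python
import math

def point_line_distance(p0, p1, p2):
    """Perpendicular distance of p0 from the line through p1 and p2.

    Implements d = |det(p2 - p1, p1 - p0)| / |p2 - p1|, where the
    determinant of two 2-D vectors is their cross product.
    """
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    num = abs((x2 - x1) * (y1 - y0) - (y2 - y1) * (x1 - x0))
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den

# Distance of (0, 1) from the horizontal line through (0, 0) and (2, 0):
d = point_line_distance((0, 1), (0, 0), (2, 0))  # d == 1.0
```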
Further, in step S2, the polygon approximation fits the curve with the fewest possible segments: the curve is sampled, a finite number of points are taken on it, and these points are connected to obtain a polygon fitting the outline of an arbitrary object.
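One classical way to fit a curve with a minimum number of segments, consistent with the recursive point-taking described above, is the Ramer-Douglas-Peucker algorithm. The sketch below is an assumption about how this step could be realized, not the patent's verbatim procedure:

```python
import math

def _dist(p0, p1, p2):
    # Perpendicular distance of p0 from the line p1-p2.
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    num = abs((x2 - x1) * (y1 - y0) - (y2 - y1) * (x1 - x0))
    return num / math.hypot(x2 - x1, y2 - y1)

def approx_polygon(points, eps):
    """Ramer-Douglas-Peucker: drop points closer than eps to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]       # one segment suffices
    left = approx_polygon(points[:idx + 1], eps)
    right = approx_polygon(points[idx:], eps)
    return left[:-1] + right                 # merge, dropping the shared point

# Three nearly collinear points collapse to a single segment:
approx_polygon([(0, 0), (1, 0.01), (2, 0)], 0.1)   # [(0, 0), (2, 0)]
```

A larger eps yields fewer segments (a coarser polygon); a smaller eps tracks the contour more closely.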
Further, in step S2, the polygon approximation may instead be performed entirely with polylines: the polyline is sampled, a finite number of points are taken on it, and these points are connected to obtain a polygon fitting the outline of an arbitrary object.
Further, in step S3, the contours are classified by comparing the ratios of the lengths of the lines connecting the vertices of each object.
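A minimal sketch of how such a ratio-based check might look; the template representation, tolerance, and all names are assumptions for illustration, since the patent does not specify them:

```python
import math

def side_ratios(vertices):
    """Lengths of the lines joining consecutive vertices, normalized by
    the longest side so the ratios are scale-invariant."""
    n = len(vertices)
    lengths = [math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n)]
    longest = max(lengths)
    return [length / longest for length in lengths]

def is_projection_target(vertices, template, tol=0.05):
    """Accept a contour if its sorted side ratios match a stored template."""
    ratios = side_ratios(vertices)
    if len(ratios) != len(template):
        return False
    return all(abs(r - t) <= tol for r, t in zip(sorted(ratios), sorted(template)))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
is_projection_target(square, [1.0, 1.0, 1.0, 1.0])   # True
```

Contours that fail the check would have their vertex coordinates deleted, as step S3 prescribes.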
Further, in step S4, when the projected image is output, the part outside the region is set to black.
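Setting everything outside the identified region to black can be sketched as a simple per-pixel mask; a nested-list image and a precomputed 0/1 region mask are assumed here for illustration:

```python
def mask_outside_region(image, region_mask):
    """Keep pixels where the region mask is 1; paint the rest black (0)."""
    return [[pixel if inside else 0
             for pixel, inside in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, region_mask)]

frame = [[120, 130], [140, 150]]
mask = [[1, 0], [0, 1]]
mask_outside_region(frame, mask)   # [[120, 0], [0, 150]]
```

Since the projector emits no light for black pixels, this confines the projected content to the detected object.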
The method for automatically identifying a projection mapping region based on machine vision disclosed by the present invention has the following beneficial effects:
1. The automatic identification of the projection region replaces the step of setting the projection region manually, and is particularly suitable for projecting onto models whose position or angle changes. When a model in the camera's field of view changes, the technique quickly identifies the number and shapes of the objects in the region and regenerates the projection region automatically, fundamentally avoiding any deviation between the projection and the position of the physical object.

2. Automatic identification of the projection region opens up a whole new set of applications for projection mapping. However the projection model is moved, the projection region can be computed in real time, which means real-time rendering onto moving physical objects. This will significantly enhance the expressiveness of projection mapping.
Brief description of the drawings
Fig. 1 is a flow chart of the algorithm of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely. The described embodiments are obviously only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative work fall within the scope of protection of the present invention.
The core of the present invention is to provide a method for automatically identifying a projection mapping region based on machine vision, which adapts to changes automatically and determines the projection region from the image captured by a camera. The projection region can be detected quickly, the projection range reset, and the deviation between the projected image and the physical object corrected in real time. Moreover, projection mapping is not limited to stationary objects: the contours of moving objects can also be captured, enabling real-time projection rendering onto objects in motion.
Referring to Fig. 1, the method for automatically identifying a projection mapping region based on machine vision comprises the following steps:

S1: the input color image is first converted to a grayscale image and then binarized; by setting a suitable threshold, each pixel of the image is replaced with one of the two values 0 or 1, preliminarily separating foreground from background;

S2: using the binary image output by the previous step, the contours of the shapes are approximated by polygons, and the vertex coordinates of each object's contour are output;

S3: the contours are classified to determine whether each one is an object or model that needs to be projected onto; if not, the vertex coordinates of that object are deleted;

S4: the projected image is output into the region enclosed by the vertices.
In an embodiment of the present invention, in step S1, when preliminarily separating foreground and background, the binarized image is F_T[i, j] and T is the chosen threshold, satisfying the following formula:

$$F_T[i,j]=\begin{cases}1 & \text{if } F[i,j]\le T\\ 0 & \text{otherwise}\end{cases}$$
In an embodiment of the present invention, in step S2, when a curve is sampled, the two endpoints of the curve are X1 and X2, with coordinates X1 = (x1, y1) and X2 = (x2, y2). A point X0 = (x0, y0) is then sought between X1 and X2, and its position is evaluated by computing the distance

$$d=\frac{\left|\det\!\left(\mathbf{x}_2-\mathbf{x}_1,\;\mathbf{x}_1-\mathbf{x}_0\right)\right|}{\left|\mathbf{x}_2-\mathbf{x}_1\right|}$$

where d is the perpendicular distance of the intermediate point X0 from the line through X1 and X2. A further point is taken between every two points and this step is repeated, yielding a polygonal contour that fits the outline of an arbitrary object.
In step S2, the method of fitting the shape contour is not unique. In one embodiment of the present invention, the polygon approximation fits the curve with the fewest possible segments: the curve is sampled, a finite number of points are taken on it, and these points are connected to obtain a polygon fitting the outline of an arbitrary object.
In another embodiment of the present invention, the polygon approximation is performed entirely with polylines: the polyline is sampled, a finite number of points are taken on it, and these points are connected to obtain a polygon fitting the outline of an arbitrary object.
In step S3, any applicable machine learning algorithm can be used to classify the contours. Preferably, in one embodiment of the present invention, the classification is realized by comparing the ratios of the lengths of the lines connecting the vertices of each object.
In step S4, the format of the output image is not specifically restricted. Preferably, in one embodiment of the present invention, when the projected image is output, the part outside the region is set to black.
Compared with the background art described above, the present invention:

1. uses machine vision algorithms to detect and position the projection mapping region automatically, replacing the manual drawing of the projection region required by previous solutions;

2. can quickly correct the projected-image deviation caused by displacement of the projector or the model;

3. simplifies the installation of a projection mapping system, achieving plug-and-play operation.
The above is only a preferred embodiment of the present invention and does not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the embodiments can still be modified, and some or all of their technical features can be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (7)

1. A method for automatically identifying a projection mapping region based on machine vision, characterized by comprising the following steps:

S1: the input color image is first converted to a grayscale image and then binarized; by setting a suitable threshold, each pixel of the image is replaced with one of the two values 0 or 1, preliminarily separating foreground from background;

S2: using the binary image output by the previous step, the contours of the shapes are approximated by polygons, and the vertex coordinates of each object's contour are output;

S3: the contours are classified to determine whether each one is an object or model that needs to be projected onto; if not, the vertex coordinates of that object are deleted;

S4: the projected image is output into the region enclosed by the vertices.
2. The method according to claim 1, characterized in that in step S1, when preliminarily separating foreground and background, the binarized image is F_T[i, j] and T is the chosen threshold, satisfying the following formula:
$$F_T[i,j]=\begin{cases}1 & \text{if } F[i,j]\le T\\ 0 & \text{otherwise}\end{cases}$$
3. The method according to claim 1, characterized in that in step S2, when a curve is sampled, the two endpoints of the curve are X1 and X2, with coordinates X1 = (x1, y1) and X2 = (x2, y2); a point X0 = (x0, y0) is then sought between X1 and X2, its position being evaluated by computing the distance:
$$d=\frac{\left|\det\!\left(\mathbf{x}_2-\mathbf{x}_1,\;\mathbf{x}_1-\mathbf{x}_0\right)\right|}{\left|\mathbf{x}_2-\mathbf{x}_1\right|}$$
where d is the perpendicular distance of the intermediate point X0 from the line through X1 and X2; a further point is taken between every two points and this step is repeated to obtain a polygonal contour fitting the outline of an arbitrary object.
4. The method according to claim 1, characterized in that in step S2, the polygon approximation fits the curve with the fewest possible segments: the curve is sampled, a finite number of points are taken on it, and these points are connected to obtain a polygon fitting the outline of an arbitrary object.
5. The method according to claim 1, characterized in that in step S2, the polygon approximation is performed entirely with polylines: the polyline is sampled, a finite number of points are taken on it, and these points are connected to obtain a polygon fitting the outline of an arbitrary object.
6. The method according to claim 1, characterized in that in step S3, the contours are classified by comparing the ratios of the lengths of the lines connecting the vertices of each object.
7. The method according to claim 1, characterized in that in step S4, when the projected image is output, the part outside the region is set to black.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710415053.3A CN107240110A (en) | 2017-06-05 | 2017-06-05 | Projection mapping region automatic identifying method based on machine vision technique |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107240110A true CN107240110A (en) | 2017-10-10 |
Family
ID=59984981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710415053.3A Pending CN107240110A (en) | 2017-06-05 | 2017-06-05 | Projection mapping region automatic identifying method based on machine vision technique |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107240110A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010004417A2 (en) * | 2008-07-06 | 2010-01-14 | Sergei Startchik | Method for distributed and minimum-support point matching in two or more images of 3d scene taken with video or stereo camera. |
CN102542601A (en) * | 2010-12-10 | 2012-07-04 | 三星电子株式会社 | Equipment and method for modeling three-dimensional (3D) object |
US8905551B1 (en) * | 2010-12-23 | 2014-12-09 | Rawles Llc | Unpowered augmented reality projection accessory display device |
CN106373085A (en) * | 2016-09-20 | 2017-02-01 | 福州大学 | Intelligent terminal 3D watch try-on method and system based on augmented reality |
CN106600638A (en) * | 2016-11-09 | 2017-04-26 | 深圳奥比中光科技有限公司 | Realization method of augmented reality |
Non-Patent Citations (1)
Title |
---|
MI Chao et al.: 《装卸机器视觉及其应用》 (Machine Vision for Cargo Handling and Its Applications), Shanghai Scientific & Technical Publishers, 31 January 2016 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110852138A (en) * | 2018-08-21 | 2020-02-28 | 北京图森未来科技有限公司 | Method and device for labeling object in image data |
CN110852138B (en) * | 2018-08-21 | 2021-06-01 | 北京图森智途科技有限公司 | Method and device for labeling object in image data |
CN110769225A (en) * | 2018-12-29 | 2020-02-07 | 成都极米科技股份有限公司 | Projection area obtaining method based on curtain and projection device |
CN113506314A (en) * | 2021-06-25 | 2021-10-15 | 北京精密机电控制设备研究所 | Automatic grabbing method and device for symmetrical quadrilateral workpiece under complex background |
CN113506314B (en) * | 2021-06-25 | 2024-04-09 | 北京精密机电控制设备研究所 | Automatic grabbing method and device for symmetrical quadrilateral workpieces under complex background |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104331168B (en) | Display adjusting method and electronic equipment | |
CN104598915B (en) | A kind of gesture identification method and device | |
JP6079832B2 (en) | Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method | |
US10430707B2 (en) | Information processing device | |
CN106548165A (en) | A kind of face identification method of the convolutional neural networks weighted based on image block | |
CN107357428A (en) | Man-machine interaction method and device based on gesture identification, system | |
CN111414780A (en) | Sitting posture real-time intelligent distinguishing method, system, equipment and storage medium | |
KR101743763B1 (en) | Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same | |
CN106598227A (en) | Hand gesture identification method based on Leap Motion and Kinect | |
CN111079625B (en) | Control method for automatically following rotation of camera along with face | |
CN107240110A (en) | Projection mapping region automatic identifying method based on machine vision technique | |
CN106570447B (en) | Based on the matched human face photo sunglasses automatic removal method of grey level histogram | |
CN109003224A (en) | Strain image generation method and device based on face | |
US20210035336A1 (en) | Augmented reality display method of simulated lip makeup | |
CN105069745A (en) | face-changing system based on common image sensor and enhanced augmented reality technology and method | |
CN103544478A (en) | All-dimensional face detection method and system | |
CN108885801A (en) | Information processing equipment, information processing method and program | |
WO2018042751A1 (en) | Gesture determining device, gesture operating device, and gesture determining method | |
CN103426000B (en) | A kind of static gesture Fingertip Detection | |
CN116935008A (en) | Display interaction method and device based on mixed reality | |
CN110442242B (en) | Intelligent mirror system based on binocular space gesture interaction and control method | |
CN115965950A (en) | Driver fatigue detection method based on multi-feature fusion state recognition network | |
CN106485765B (en) | A kind of method of automatic description face stick figure | |
CN207349152U (en) | Fan air-supply control device, fan air-supply control device | |
CN206363347U (en) | Based on Corner Detection and the medicine identifying system that matches |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171010 |