CN109859104A - Method for generating a picture from a video, computer-readable medium, and conversion system - Google Patents
- Publication number
- CN109859104A (application CN201910050480.5A)
- Authority
- CN
- China
- Prior art keywords
- picture
- feature
- video
- frame
- adjacent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a method for generating a picture from a video, which stitches together all the pictures in the video while avoiding stitching misalignment, ghosting, or stitching failure. The method includes the following steps. Step S1: shoot the scene as a video. Step S2: obtain the features of the central region of each frame in the video. Step S3: compare each pair of adjacent frames and rectify and align them based on the features the two frames share. Step S4: output a panorama. The present invention also provides a computer-readable medium. The present invention also provides a conversion system.
Description
[technical field]
The present invention relates to the field of computer vision, and provides a method for generating a picture from a video, a computer-readable medium, and a conversion system.
[background technique]
Existing video-stitching techniques all compute, from feature operators such as SIFT and SURF, one or more homography matrices that map each frame, and then use the computed homography matrices to map every frame into a common coordinate space, where all the pictures are stitched together into one complete large image. However, when the video is shot on an electronic device such as a mobile phone, this approach easily produces blurring and distortion of objects at the frame edges, so the stitched image is ultimately unsatisfactory and subsequent detection and recognition of features in the image is impaired.
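As context for the prior art described above, a homography is a 3×3 matrix that maps points of one frame into the coordinate space of another. A minimal pure-Python sketch of applying such a matrix to a point (the matrices shown are illustrative values, not ones estimated from real SIFT/SURF matches):

```python
def apply_homography(H, x, y):
    """Map a point (x, y) through a 3x3 homography matrix H
    (row-major nested lists) and return the dehomogenized result."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# An identity homography leaves points unchanged; a translation homography
# shifts them. The prior art estimates such matrices from feature matches
# and warps every frame into one shared coordinate space before stitching.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]  # translate by (+5, -2)
print(apply_homography(I, 3.0, 4.0))  # -> (3.0, 4.0)
print(apply_homography(T, 3.0, 4.0))  # -> (8.0, 2.0)
```

The edge blur the background section complains about corrupts exactly the feature matches from which these matrices are estimated, which is what motivates the central-region approach of the invention.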
[summary of the invention]
To overcome the problems of the prior art, the present invention provides a method for generating a picture from a video, a computer-readable medium, and a conversion system.
The present invention solves the technical problem by providing a method for generating a picture from a video, which stitches together all the pictures in the video while avoiding stitching misalignment, ghosting, or stitching failure. The method includes the following steps. Step S1: shoot the scene as a video. Step S2: obtain the features of the central region of each frame in the video. Step S3: compare each pair of adjacent frames and rectify and align them based on the features the two frames share. And step S4: output a panorama.
Preferably, step S2 includes step S21: read the video and crop the central region of each frame; step S22: detect the features of the central region; and step S23: label the detected features with high-level semantics.
Preferably, step S3 includes step S31: process the edge regions of two adjacent frames; step S32: identify the features that the central regions of the two adjacent frames have in common; and step S33: overlap-map the shared central-region features of the two adjacent frames and stitch them into one picture.
Preferably, step S31 includes step S311: obtain the center point of each frame; step S312: identify the features shared by the edge regions of the two adjacent frames, and compute each such feature's distance from its frame's center point; and step S313: for each feature shared by the two frames, keep the copy closer to its center point and replace the copy farther from its center point.
Preferably, before step S312 the method further includes step S3121: establish a coordinate system with the center point as the origin; and step S3122: detect the features of each frame's edge region.
Preferably, the features include shape, color, and line patterns.
Preferably, the central region is the region within the focusing range of the capture device.
Preferably, the video is shot by following the order in which the features are positioned in the scene, so that the order of adjacent frames matches the actual order of positions in the scene.
The present invention also provides a computer-readable medium, characterized in that a computer program is stored in the computer-readable medium, wherein the computer program is arranged to execute, when run, the above method for generating a picture from a video.
The present invention also provides a conversion system, characterized in that the conversion system includes a shooting module for shooting the scene as a video; a preprocessing module for obtaining the features of the central region of each frame in the video; a processing module for comparing adjacent frames and rectifying and aligning them based on the features the two frames share; and an output module for outputting the panorama.
Compared with the prior art, the method of the present invention for generating a picture from a video has the following advantages:
1. By shooting the scene as a video and rectifying and aligning every frame using the features of the central region, the method prevents the blurred, distorted features of the edge regions from introducing errors into the panorama generated after the pairwise stitching of adjacent frames.
2. By rectifying and aligning the features of the edge regions, the edge region of every frame is brought closer to the real scene, so the stitching result is finer and more complete.
[Description of the drawings]
Fig. 1 is a flow diagram of the method for generating a picture from a video according to the first embodiment of the present invention.
Fig. 2 is a flow diagram of step S2 of the method of Fig. 1 according to the first embodiment of the present invention.
Fig. 3 is a flow diagram of step S3 of the method of Fig. 1 according to the first embodiment of the present invention.
Fig. 4 is a flow diagram of step S31 of the method of Fig. 3 according to the first embodiment of the present invention.
Fig. 5 is a flow diagram of step S312 of the method of Fig. 4 according to the first embodiment of the present invention.
Fig. 6 is a schematic diagram of the features detected and labeled in two adjacent frames by the method of the first embodiment of the present invention.
Fig. 7A is a schematic diagram of the method of the first embodiment of the present invention replacing the blurred, distorted features of the edge regions of two adjacent frames.
Fig. 7B is a schematic diagram of the method of the first embodiment of the present invention stitching two adjacent frames.
Fig. 8 is a module diagram of the conversion system according to the third embodiment of the present invention.
Fig. 9 is a module diagram of the processing unit in the conversion system of the third embodiment of the present invention.
Description of reference symbols: 1, conversion system; 11, shooting module; 12, preprocessing module; 13, processing module; 14, output module; 121, cropping unit; 122, detection unit; 123, labeling unit; 131, processing unit; 132, recognition unit; 133, stitching unit; 1311, capture submodule; 1312, modeling submodule; 1313, detection submodule; 1314, calculation submodule; 1315, replacement submodule.
[Detailed description]
To make the purpose, technical solution, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
Referring to Fig. 1, the first embodiment of the present invention provides a method for generating a picture from a video, which stitches together all the pictures in the video while avoiding stitching misalignment, ghosting, or stitching failure. The method includes the following steps:
Step S1: shoot the scene as a video;
Step S2: obtain the features of the central region of each frame in the video;
Step S3: compare each pair of adjacent frames and rectify and align them based on the features the two frames share; and
Step S4: output a panorama.
First, the scene is shot as a video. Because the capture device has a limited focusing range while shooting, only the central region of each frame in the video is sharp, and the edge regions may be blurred and distorted. The features of the central region of each frame in the video are therefore obtained, and the blurred, distorted features of the edge regions are rectified and aligned: each pair of adjacent frames is compared and rectified and aligned based on the features the two frames share. Finally, the rectified and aligned pictures are output as one panorama.
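Treating per-frame feature extraction and pairwise alignment as black boxes, the S1-S4 flow can be sketched as follows (a structural sketch only; all function and parameter names are hypothetical, not the patent's implementation):

```python
def video_to_panorama(frames, get_central_features, align_pair, stitch_all):
    """Sketch of steps S1-S4: `frames` is the shot video (S1); features of
    each frame's central region are extracted (S2); adjacent frames are
    rectified and aligned pairwise on shared features (S3); the stitched
    panorama is returned for output (S4)."""
    feats = [get_central_features(f) for f in frames]            # S2
    aligned = [frames[0]]
    for prev, cur, fp, fc in zip(frames, frames[1:], feats, feats[1:]):
        shared = set(fp) & set(fc)          # features the two frames share
        aligned.append(align_pair(prev, cur, shared))            # S3
    return stitch_all(aligned)                                   # S4
```

With stub callables standing in for the real modules, `video_to_panorama(["f1", "f2", "f3"], ...)` walks each adjacent pair exactly once, which mirrors the pairwise comparison of step S3.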
It can be understood that, in one example, the scene is an unmanned shelf in an unmanned supermarket, the features are features of the goods on the shelf, and the features include the shape, color, and line patterns of the goods. The central region is the region within the focusing range, and the edge region is everything outside the central region. A feature at a blurred, distorted location of a picture is blurred and distorted relative to a sharp location of the picture: for example, if a feature is recognized with 90% confidence at a sharp location, recognizing the same feature with only 80% confidence indicates blur and distortion.
It can be understood that, because the distance from the shooting target to the imaging plane is small, a horizontal movement of the imaging plane causes a large change in the angle between the edge-region shooting targets and the imaging plane, making adjacent frames prone to stitching misalignment or complete stitching failure. Stitching the pictures obtained from the central region of the video frames effectively solves this problem.
Rectification and alignment means mapping and projecting two adjacent frames onto the same plane, with identical features overlap-mapped onto each other. The panorama is the picture obtained after all pictures in the video have been rectified and aligned.
The video is shot by following the order in which the features are positioned in the scene, so that the order of adjacent frames matches the actual order of positions in the scene. For example, if the scene contains features A, B, C, D, F from left to right, the scene may be shot from A toward F, or from F toward A.
Referring to Fig. 2, step S2 includes:
Step S21: read the video and crop the central region of each frame;
Step S22: detect the features of the central region; and
Step S23: label the detected features with high-level semantics.
Because only the central region of the shot video is sharp, after the video is read the central region of each frame is cropped, the features in each frame are detected, and the detected features are then labeled with high-level semantics.
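A minimal sketch of the cropping in step S21, under the assumption that the central region is a fixed fraction of the frame (the 1/2 fraction and the list-of-rows frame representation are illustrative choices, not specified by the patent):

```python
def crop_central_region(frame, fraction=0.5):
    """Keep only the central `fraction` of the frame in both dimensions,
    approximating the region inside the capture device's focusing range."""
    h, w = len(frame), len(frame[0])
    dy = int(h * (1 - fraction) / 2)
    dx = int(w * (1 - fraction) / 2)
    return [row[dx:w - dx] for row in frame[dy:h - dy]]

# An 8x8 toy "frame" whose pixel values are their column indices.
frame = [list(range(8)) for _ in range(8)]
central = crop_central_region(frame)
print(len(central), len(central[0]))  # -> 4 4
```

Feature detection (S22) and high-level semantic labeling (S23) would then run on `central` rather than on the full frame, so blurred edge pixels never enter the feature set.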
It can be understood that the features in a picture are detected with low-level features, and the detected features are then labeled with high-level semantics. High-level semantic labeling selects the best features from the detected ones. For example, if low-level features extract all the features of a bottle of Sprite, some of those features are affected by factors such as lighting angle, so the bottle cannot be reliably recognized from all of them. When the detected features are labeled with high-level semantics, the features affected by factors such as lighting angle are discarded and the features that best identify the bottle are kept; the correspondence between the high-level semantic label and the bottle is unique.
Referring to Fig. 3, step S3 includes:
Step S31: process the edge regions of two adjacent frames;
Step S32: identify the features that the central regions of the two adjacent frames have in common; and
Step S33: overlap-map the shared central-region features of the two adjacent frames and stitch them into one picture.
After the features of each frame's central region have been labeled, the edge regions of the two adjacent frames are processed; then, based on the central-region features of the two adjacent frames, their shared central-region features are identified, overlap-mapped, and the two frames are stitched into one picture.
It can be understood that overlap mapping means mapping the two copies of an identical feature in two adjacent frames to the same position, so that only one copy of the feature is shown.
In some application scenarios, several features are detected in the central region of the first frame, including black tea and green tea, and several features are also detected in the central region of the second frame, including green tea and Sprite. The edge regions of the two frames are processed, the feature shared by the central regions of the two adjacent frames is identified (green tea), the two copies of the green tea feature in the central regions of the two frames are overlap-mapped, and the two frames are stitched into one picture, whose central-region features are then black tea, green tea, and Sprite.
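The black-tea/green-tea/Sprite example can be sketched as an ordered merge keyed on the shared feature (a pure-Python illustration; representing features as strings is an assumption for clarity):

```python
def merge_on_shared(left, right):
    """Stitch two ordered central-region feature lists by overlap-mapping
    their shared features: each shared feature appears once, and the
    left-to-right order of the scene is preserved."""
    merged = list(left)
    merged += [f for f in right if f not in left]
    return merged

print(merge_on_shared(["black tea", "green tea"], ["green tea", "Sprite"]))
# -> ['black tea', 'green tea', 'Sprite']
```

The shared feature (green tea) acts as the anchor: it collapses to a single copy, exactly as the overlap mapping of step S33 shows only one copy of an identical feature.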
As a variant, step S23 can be omitted: the detected features are used directly to identify the features shared by the central regions of the two adjacent frames.
Referring to Fig. 4, step S31 includes:
Step S311: obtain the center point of each frame;
Step S312: identify the features shared by the edge regions of the two adjacent frames, and compute each such feature's distance from its frame's center point; and
Step S313: for each feature shared by the two frames, keep the copy closer to its center point and replace the copy farther from its center point.
When the edge regions of two adjacent frames are processed, the center point of each frame is obtained first; the center point is the center of the frame, i.e. the intersection of its diagonals. The features shared by the edge regions of the two adjacent frames are then identified, and each such feature's distance from its frame's center point is computed. Finally, for each feature shared by the two frames, the copy closer to its center point replaces the copy farther from its center point.
Referring to Fig. 5, before step S312 the method further includes step S3121: establish a coordinate system with the frame's center point as the origin; and step S3122: detect the features of each frame's edge region. Before the features shared by the edge regions of the two adjacent frames are identified, a coordinate system is established with each frame's center point as the origin and the features of each frame's edge region are detected; the shared edge-region features are then identified, each feature's distance to the center point is computed from its position in its frame's coordinate system, and for each feature shared by the two frames, the copy closer to its center point replaces the copy farther from its center point.
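Because the coordinate system's origin is the center point, the distance of step S312 is simply the Pythagorean theorem applied to a feature's coordinates. A minimal sketch of the distance computation and the keep/replace decision of step S313 (the sample coordinates are illustrative):

```python
import math

def distance_from_center(x, y):
    """Distance of a feature at (x, y) from the frame's center point,
    which is the origin of the frame's coordinate system."""
    return math.hypot(x, y)

def keep_nearer_copy(pos_frame1, pos_frame2):
    """Step S313: of a shared feature's two copies, keep the one closer
    to its own center point. Returns 1 or 2 for the frame whose copy
    is kept (i.e. considered sharp rather than blurred)."""
    d1 = distance_from_center(*pos_frame1)
    d2 = distance_from_center(*pos_frame2)
    return 1 if d1 <= d2 else 2

print(keep_nearer_copy((-3.5, 1), (-4, 1)))  # -> 1 (first frame's copy kept)
```

The printed result matches the discussion around Fig. 6 below: the copy of feature a in the first frame lies nearer its center point than the copy in the second frame, so the first frame's copy replaces the second frame's.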
In some application scenarios, in the coordinate system established for the first frame, several features are detected in the edge region, including cola and black tea; in the coordinate system established for the second frame, several features are also detected in the edge region, including black tea and Sprite. The feature shared by the two frames is identified (black tea), and the distance of the black tea feature to its frame's center point is computed in each frame: the computation yields a distance of 2 cm from the black tea feature to the first frame's center point, and 3 cm from the black tea feature to the second frame's center point. Finally, the copy closer to its center point replaces the copy farther from its center point: the black tea feature of the first frame replaces the black tea feature of the second frame, and the replacement may cover part of the black tea feature or all of it.
It can be understood that, of a shared feature's two copies in adjacent frames, the copy farther from its frame's center point is considered blurred and distorted, while the closer copy is considered sharp; the sharp feature therefore replaces the blurred, distorted one.
As a variant, steps S3121 and S3122 may be swapped: the features of each frame's edge region are detected first, and the coordinate system with the frame's center point as the origin is established afterwards.
Referring to Fig. 6, step S312 specifically identifies the features shared by the edge regions of two adjacent frames and computes each feature's distance from its frame's center point. Taking the features detected in the first and second frames as an example, the video is shot by panning from left to right, and several features are detected in the two frames. The features in the first frame include a, b, c, d, e, f; the positions of its edge-region features in its coordinate system are a(-3.5, 1), b(-3, -0.8), e(1.2, -1.2), f(3, 1). The features in the second frame also include a, b, c, d, e, f; the positions of its edge-region features are a(-4, 1), b(-3.5, -0.8), e(0.7, -1.2), f(2.5, 1). The shared edge-region features of the two frames are then identified: feature a in the second frame is identical to feature a in the first frame, feature b in the second frame is identical to feature b in the first frame, feature e in the second frame is identical to feature e in the first frame, and feature f in the second frame is identical to feature f in the first frame.
The distance of each shared feature from its frame's center point is then computed; it can be computed directly from the feature's position in its coordinate system, or via the Pythagorean theorem from that position. The computation shows that feature a in the first frame is closer to the first frame's center point than feature a in the second frame is to the second frame's center point; feature b in the first frame is likewise closer than feature b in the second frame; feature e in the first frame is farther from the first frame's center point than feature e in the second frame is from the second frame's center point; and feature f in the first frame is likewise farther than feature f in the second frame.
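With the coordinates listed above, the four comparisons can be verified numerically (a pure-Python sketch using the same example values):

```python
import math

frame1 = {"a": (-3.5, 1), "b": (-3, -0.8), "e": (1.2, -1.2), "f": (3, 1)}
frame2 = {"a": (-4, 1), "b": (-3.5, -0.8), "e": (0.7, -1.2), "f": (2.5, 1)}

# Distance of each shared edge-region feature from its frame's center
# point (the origin of that frame's coordinate system), Pythagorean theorem.
nearer = {k: 1 if math.hypot(*frame1[k]) < math.hypot(*frame2[k]) else 2
          for k in frame1}
print(nearer)  # -> {'a': 1, 'b': 1, 'e': 2, 'f': 2}
```

Frames 1's copies of a and b and frame 2's copies of e and f are the sharper ones, which is exactly the replacement pattern step S313 applies below.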
Referring to Fig. 7A, step S313 specifically selects, for each feature shared by the two frames, the copy closer to its center point and replaces the copy farther from its center point: feature a of the first frame replaces feature a of the second frame, feature b of the first frame replaces feature b of the second frame, feature e of the second frame replaces feature e of the first frame, and feature f of the second frame replaces feature f of the first frame.
Continuing with Fig. 6, step S32 identifies the features shared by the central regions of the two adjacent frames. In the detected first and second frames, the central-region features of the first frame are c and d, and the central-region features of the second frame are also c and d. The shared features of the two frames are identified: feature c of the first frame is identical to feature c of the second frame, and feature d of the first frame is identical to feature d of the second frame.
Referring to Fig. 7B, step S33 specifically overlap-maps the shared central-region features of the two adjacent frames and stitches them into one picture. Feature a of the first frame is overlap-mapped onto feature a of the second frame, feature b of the first frame is overlap-mapped onto feature b of the second frame, and the first and second frames are stitched into one picture; the features of the stitched picture are a, b, c, d, e, f, and all of them are sharp.
Step S4 specifically: after all pictures in the video have been stitched, the stitched picture is generated as one panorama and output. Because the features in the panorama have been rectified and aligned, all features in the panorama are sharp.
After the panorama is output, the features in the panorama can be recognized and stored, so that exactly what each feature is can be identified directly from the panorama; recognition accuracy is higher on sharp features.
As a variant, instead of overlap mapping, one copy of the features shared by the central regions of the two adjacent frames may be removed before the two frames are stitched.
The second embodiment of the present invention provides a computer-readable medium in which a computer program is stored, wherein the computer program is arranged to execute, when run, the above method for generating a picture from a video.
Referring to Fig. 8, the third embodiment of the present invention provides a conversion system 1. The conversion system 1 includes a shooting module 11 for shooting the scene as a video; a preprocessing module 12 for obtaining the features of the central region of each frame in the video; a processing module 13 for comparing adjacent frames and rectifying and aligning them based on the features the two frames share; and an output module 14 for outputting the panorama.
The preprocessing module 12 includes a cropping unit 121, a detection unit 122, and a labeling unit 123. The cropping unit 121 reads the video and crops the central region of each frame, the detection unit 122 detects the features of the central region, and the labeling unit 123 labels the detected features with high-level semantics.
Referring to Fig. 9, the processing module 13 includes a processing unit 131, a recognition unit 132, and a stitching unit 133. The processing unit 131 processes the edge regions of adjacent frames, the recognition unit 132 identifies the features shared by the central regions of adjacent frames, and the stitching unit 133 overlap-maps the shared central-region features of adjacent frames and stitches them into one picture.
The processing unit 131 includes a capture submodule 1311, a modeling submodule 1312, a detection submodule 1313, a calculation submodule 1314, and a replacement submodule 1315. The capture submodule 1311 obtains the center point of each frame, the modeling submodule 1312 establishes a coordinate system with the center point as the origin, the detection submodule 1313 detects the features of each frame's edge region, the calculation submodule 1314 identifies the features shared by the edge regions of adjacent frames and computes each such feature's distance from its frame's center point, and the replacement submodule 1315 replaces, for each shared feature, the copy farther from its center point with the copy closer to its center point.
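The module hierarchy of Figs. 8 and 9 can be sketched as plain classes wired together by callables (a structural sketch only; all class and method names are assumptions, not the patent's implementation):

```python
class PreprocessingModule:
    """Module 12: crop (unit 121), detect (unit 122), and label (unit 123)
    the central-region features of each frame. The three unit methods are
    identity stubs here; a real system would supply implementations."""
    def run(self, video):
        centrals = [self.crop_central(frame) for frame in video]  # unit 121
        feats = [self.detect(c) for c in centrals]                # unit 122
        return [self.label(f) for f in feats]                     # unit 123
    def crop_central(self, frame): return frame
    def detect(self, region): return region
    def label(self, feats): return feats

class ConversionSystem:
    """System 1: shooting (11), preprocessing (12), processing (13),
    output (14), composed in the order of steps S1-S4."""
    def __init__(self, shoot, preprocess, process, output):
        self.shoot, self.preprocess = shoot, preprocess
        self.process, self.output = process, output
    def convert(self, scene):
        video = self.shoot(scene)              # module 11 (S1)
        feats = self.preprocess.run(video)     # module 12 (S2)
        aligned = self.process(video, feats)   # module 13 (S3)
        return self.output(aligned)            # module 14 (S4)
```

Keeping each module behind a small interface like this mirrors the patent's decomposition: the stitching logic of module 13 can change without touching shooting or output.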
In accordance with embodiments of the present disclosure, the processes described above with reference to the flow charts may be implemented as computer software programs. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flow charts. In such an embodiment, the computer program may be downloaded and installed from a network through a communications portion, and/or installed from detachable media. When the computer program is executed by a central processing unit (CPU), the functions defined above in the methods of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in connection with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted with any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or combinations thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through a network of any kind, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
Compared with the prior art, the method of the present invention for generating a picture from a video has the following advantages:
1. By shooting the scene as a video and rectifying and aligning every frame using the features of the central region, the method prevents the blurred, distorted features of the edge regions from introducing errors into the panorama generated after the pairwise stitching of adjacent frames.
2. By rectifying and aligning the features of the edge regions, the edge region of every frame is brought closer to the real scene, so the stitching result is finer and more complete.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A method for generating a picture from a video, for stitching together all the pictures in the video while avoiding stitching misalignment, ghosting, or stitching failure, characterized in that the method includes the following steps:
Step S1: shoot the scene as a video;
Step S2: obtain the features of the central region of each frame in the video;
Step S3: compare each pair of adjacent frames and rectify and align them based on the features the two frames share; and
Step S4: output a panorama.
2. The method for generating a picture from a video of claim 1, characterized in that step S2 includes:
Step S21: read the video and crop the central region of each frame;
Step S22: detect the features of the central region; and
Step S23: label the detected features with high-level semantics.
3. The method for generating a picture from a video of claim 1, characterized in that step S3 includes:
Step S31: process the edge regions of two adjacent frames;
Step S32: identify the features that the central regions of the two adjacent frames have in common; and
Step S33: overlap-map the shared central-region features of the two adjacent frames and stitch them into one picture.
4. The method for generating a picture from a video of claim 3, characterized in that step S31 includes:
Step S311: obtain the center point of each frame;
Step S312: identify the features shared by the edge regions of the two adjacent frames, and compute each such feature's distance from its frame's center point; and
Step S313: for each feature shared by the two frames, keep the copy closer to its center point and replace the copy farther from its center point.
5. The method for generating a picture from a video according to claim 4, characterized in that, before step S312, the method further comprises:
Step S3121: establishing a coordinate system with the center point as the origin; and
Step S3122: detecting the features in the edge region of each frame.
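The selection rule of steps S311-S313 (keep the observation nearer its frame's center, where distortion is typically lowest) can be sketched directly. Returning `"a"` or `"b"` is an illustrative convention, not taken from the patent text.

```python
import math

def keep_nearer_feature(pos_in_a, pos_in_b, center_a, center_b):
    """Sketch of S311-S313: for one feature observed in the edge regions of
    both adjacent frames, keep the observation closer to its frame's center
    point and drop the farther one."""
    da = math.dist(pos_in_a, center_a)   # distance to center of frame A
    db = math.dist(pos_in_b, center_b)   # distance to center of frame B
    return "a" if da <= db else "b"
```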
6. The method for generating a picture from a video according to claim 1, characterized in that the features include shapes, colors, and lines.
7. The method for generating a picture from a video according to claim 1, characterized in that the central region is the region within the focus range of the shooting device.
8. The method for generating a picture from a video according to claim 1, characterized in that the video is shot by following the positional order of the features in the scene, so that the order of adjacent frames is consistent with the actual positional order in the scene.
9. A computer-readable medium, characterized in that the computer-readable medium stores a computer program, wherein the computer program, when run, is arranged to perform the method for generating a picture from a video according to any one of claims 1-8.
10. A conversion system, characterized in that the conversion system comprises: a shooting module, for shooting a scene into a video; a preprocessing module, for obtaining the features of the central region of each frame in the video; a processing module, for comparing two adjacent frames and correcting and aligning the two frames according to identical features in the adjacent frames; and an output module, for outputting a panorama.
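The four-module system of claim 10 can be sketched as a thin pipeline class. The class and method names are illustrative, not from the patent; each module is injected as a callable so the skeleton stays independent of any particular implementation.

```python
class ConversionSystem:
    """Skeleton of the four modules of claim 10; each constructor argument
    is a callable standing in for the corresponding module."""

    def __init__(self, shoot, preprocess, process, output):
        self.shoot, self.preprocess = shoot, preprocess
        self.process, self.output = process, output

    def run(self, scene):
        frames = self.shoot(scene)                  # shooting module: scene -> video
        features = self.preprocess(frames)          # preprocessing: central-region features
        panorama = self.process(frames, features)   # processing: correct and align frames
        return self.output(panorama)                # output module: emit the panorama
```

Wiring in trivial stand-ins shows the data flow without committing to any module internals.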
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910050480.5A CN109859104B (en) | 2019-01-19 | 2019-01-19 | Method for generating picture by video, computer readable medium and conversion system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109859104A true CN109859104A (en) | 2019-06-07 |
CN109859104B CN109859104B (en) | 2020-04-17 |
Family
ID=66895283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910050480.5A Active CN109859104B (en) | 2019-01-19 | 2019-01-19 | Method for generating picture by video, computer readable medium and conversion system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109859104B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105303544A (en) * | 2015-10-30 | 2016-02-03 | 河海大学 | Video splicing method based on minimum boundary distance |
CN105869113A (en) * | 2016-03-25 | 2016-08-17 | 华为技术有限公司 | Panoramic image generation method and device |
US20160307350A1 (en) * | 2015-04-14 | 2016-10-20 | Magor Communications Corporation | View synthesis - panorama |
CN106530267A (en) * | 2016-11-30 | 2017-03-22 | 长沙全度影像科技有限公司 | Fusion method for avoiding panoramic picture misalignment |
CN107580175A (en) * | 2017-07-26 | 2018-01-12 | 济南中维世纪科技有限公司 | A kind of method of single-lens panoramic mosaic |
CN109146833A (en) * | 2018-08-02 | 2019-01-04 | 广州市鑫广飞信息科技有限公司 | A kind of joining method of video image, device, terminal device and storage medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113393381A (en) * | 2021-07-08 | 2021-09-14 | 哈尔滨工业大学(深圳) | Pipeline inner wall image generation method and device and terminal equipment |
CN113393381B (en) * | 2021-07-08 | 2022-11-01 | 哈尔滨工业大学(深圳) | Pipeline inner wall image generation method and device and terminal equipment |
CN114155473A (en) * | 2021-12-09 | 2022-03-08 | 成都智元汇信息技术股份有限公司 | Picture cutting method based on frame compensation, electronic equipment and medium |
CN114615426A (en) * | 2022-02-17 | 2022-06-10 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and readable storage medium |
CN114750147A (en) * | 2022-03-10 | 2022-07-15 | 深圳甲壳虫智能有限公司 | Robot space pose determining method and device and robot |
CN114750147B (en) * | 2022-03-10 | 2023-11-24 | 深圳甲壳虫智能有限公司 | Space pose determining method and device of robot and robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109859104A (en) | A kind of video generates method, computer-readable medium and the converting system of picture | |
CN109003311B (en) | Calibration method of fisheye lens | |
US11503275B2 (en) | Camera calibration system, target, and process | |
CN110427917B (en) | Method and device for detecting key points | |
CN109167924B (en) | Video imaging method, system, device and storage medium based on hybrid camera | |
CN108805917B (en) | Method, medium, apparatus and computing device for spatial localization | |
CN105894499B (en) | A kind of space object three-dimensional information rapid detection method based on binocular vision | |
CN110232369B (en) | Face recognition method and electronic equipment | |
KR101121034B1 (en) | System and method for obtaining camera parameters from multiple images and computer program products thereof | |
US20190132584A1 (en) | Method and device for calibration | |
US11676257B2 (en) | Method and device for detecting defect of meal box, server, and storage medium | |
CN109934093A (en) | A kind of method, computer-readable medium and identifying system identifying commodity on shelf | |
CN107588855B (en) | Detection method, detection device and the terminal device of transformer equipment temperature | |
CN110956114A (en) | Face living body detection method, device, detection system and storage medium | |
CN110458142A (en) | A kind of face identification method and system merging 2D and 3D | |
CN111339887A (en) | Commodity identification method and intelligent container system | |
CN106997366B (en) | Database construction method, augmented reality fusion tracking method and terminal equipment | |
JP5931646B2 (en) | Image processing device | |
CN110770786A (en) | Shielding detection and repair device based on camera equipment and shielding detection and repair method thereof | |
CN113763466A (en) | Loop detection method and device, electronic equipment and storage medium | |
CN115834860A (en) | Background blurring method, apparatus, device, storage medium, and program product | |
CN108734098A (en) | Human body image recognition methods and device | |
CN113240602A (en) | Image defogging method and device, computer readable medium and electronic equipment | |
CN110363818A (en) | The method for detecting abnormality and device of binocular vision system | |
CN111220360B (en) | Method and device for testing resolution of camera module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||