WO2019080257A1 - Electronic device, method for displaying a panoramic image of a car accident scene, and storage medium - Google Patents

Electronic device, method for displaying a panoramic image of a car accident scene, and storage medium

Info

Publication number
WO2019080257A1
WO2019080257A1 (PCT/CN2017/113725)
Authority
WO
WIPO (PCT)
Prior art keywords
photo
photos
scene
matching
scenes
Prior art date
Application number
PCT/CN2017/113725
Other languages
English (en)
French (fr)
Inventor
王健宗
王义文
刘奡智
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2019080257A1 publication Critical patent/WO2019080257A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an electronic device, a panoramic image display method for a car accident scene, and a storage medium.
  • the existing scheme for applying VR (Virtual Reality) technology to auto insurance claim scenarios usually stitches accident scene images manually: a photographer shoots the real scene with an expensive panoramic camera, and after the images are obtained, a post-processing team stitches them by hand and finally renders the result with Photoshop. It is undeniable that the accuracy of this approach is high, but it is both too costly and too slow to meet the real-time needs of the actual auto insurance business.
  • the main purpose of the present application is to provide an electronic device, a method for displaying a panoramic image of a car accident scene, and a storage medium, which aim to improve the efficiency of stitching car accident scene photos in auto insurance claim scenarios and to reduce cost.
  • the electronic device proposed by the present application includes a memory, a processor, and a car accident scene panoramic image display system stored in the memory and executable on the processor; when executed by the processor, the car accident scene panoramic image display system implements the following steps:
  • each of the screened other car accident scene photos is grouped with the selected car accident scene photo, and a preset type algorithm is used to calculate the homography matrix corresponding to the car accident scene photos in each group;
  • the car accident scene photos in each matching photo pair are stitched together.
  • the present application also provides a method for displaying a panoramic image of a car accident scene, the method comprising the steps of:
  • each of the screened other car accident scene photos is grouped with the selected car accident scene photo, and a preset type algorithm is used to calculate the homography matrix corresponding to the car accident scene photos in each group;
  • the car accident scene photos in each matching photo pair are stitched together.
  • the present application further provides a computer readable storage medium storing a car accident scene panoramic image display system, the car accident scene panoramic image display system being executable by at least one processor to cause the at least one processor to perform the following steps:
  • the car accident scene photos in each matching photo pair are stitched together.
  • after receiving a car accident scene photo file package of an auto insurance claim scenario, the system extracts the preset feature points of each car accident scene photo and determines the feature point sets of each photo; it then selects the car accident scene photos one by one for processing.
  • for each selected car accident scene photo, the preset screening rules are used to screen out the other car accident scene photos associated with it, so that the other photos associated with each car accident scene photo are found.
  • each selected car accident scene photo is grouped one by one with the other car accident scene photos associated with it, and the homography matrix corresponding to each group is calculated; then, based on the homography matrix corresponding to each group,
  • the photo matching confidence corresponding to each group is calculated, and the other car accident scene photo in the group with the highest confidence forms a matching photo pair with the selected car accident scene photo, so that the matching photo of each selected car accident scene photo is obtained.
  • in this way, after the system receives the photo file package of a car accident scene, it automatically completes the stitching of all the car accident scene photos in the package and quickly obtains a panoramic image of the accident scene; compared with the manual stitching of photos in the prior art, this greatly improves efficiency, ensures the real-time processing of the auto insurance claims business, and reduces labor cost.
  • FIG. 1 is a schematic flow chart of a first embodiment of the method for displaying a panoramic image of a car accident scene of the present application;
  • FIG. 2 is a schematic flow chart of a second embodiment of the method for displaying a panoramic image of a car accident scene of the present application;
  • FIG. 3 is a schematic diagram of an operating environment of an embodiment of the car accident scene panoramic image display system of the present application;
  • FIG. 4 is a program block diagram of a first embodiment of the car accident scene panoramic image display system of the present application;
  • FIG. 5 is a program block diagram of a second embodiment of the car accident scene panoramic image display system of the present application.
  • FIG. 1 is a schematic flow chart of an embodiment of a method for displaying a panoramic image of a vehicle accident scene.
  • the method for displaying a panoramic image of the accident scene includes:
  • Step S10, after receiving a car accident scene photo file package of an auto insurance claim scenario, extract the preset type feature points of each car accident scene photo, and find for each feature point the first preset number of nearest neighbor points; each feature point together with its first preset number of nearest neighbor points forms a feature point set;
  • a car accident scene photo file package of a car insurance claim scene includes a complete set of photos of a car accident scene taken by a panoramic camera.
  • after receiving a car accident scene photo file package, the system first extracts the preset type feature points (such as RootSIFT feature points) in all the car accident scene photos, and then finds, for each feature point of each photo, the neighbor points nearest to it.
  • a first preset number (for example, four) of neighbor points is found for each feature point,
  • where nearness is measured by a feature point distance (for example, the Euclidean distance);
  • each feature point and its first preset number of nearest neighbor points form a feature point set, so as to determine all the feature point sets of each car accident scene photo.
  • specifically, the distances (for example, Euclidean distances) between all the feature points of each car accident scene photo can be calculated, thereby determining the first preset number of nearest feature points of each feature point.
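The nearest-neighbor grouping above can be sketched in a few lines. This is a minimal NumPy illustration under assumed inputs (toy 2-D descriptors, k = 2), not the patent's implementation; the helper name `nearest_neighbor_sets` is hypothetical.

```python
import numpy as np

def nearest_neighbor_sets(descriptors, k=4):
    """For each feature descriptor, find the k nearest neighbors by
    Euclidean distance; each point's feature point set is its own
    index plus the indices of its k nearest neighbors."""
    d = np.asarray(descriptors, dtype=float)
    # Pairwise Euclidean distances between all descriptors.
    diff = d[:, None, :] - d[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)            # exclude self-matches
    neighbors = np.argsort(dist, axis=1)[:, :k]
    return [{i, *neighbors[i].tolist()} for i in range(len(d))]

# Five toy 2-D "descriptors"; k = 2 for brevity (the text suggests four).
points = [[0, 0], [1, 0], [0, 1], [5, 5], [5, 6]]
sets = nearest_neighbor_sets(points, k=2)
```

In practice the descriptors would be the 128-dimensional (Root)SIFT vectors of one photo, and an approximate index such as a k-d tree would replace the O(n²) distance matrix.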
  • Step S20, select the car accident scene photos one by one; after selecting one car accident scene photo, based on the feature point sets corresponding to all the car accident scene photos, screen out the other car accident scene photos associated with the selected photo according to the predetermined screening rules;
  • the system selects the car accident scene photos one by one for processing; after selecting one car accident scene photo, based on the feature point sets corresponding to the car accident scene photos, it screens out, according to the predetermined screening rules,
  • the other car accident scene photos associated with the selected car accident scene photo, where the other car accident scene photos are the remaining car accident scene photos excluding the selected one.
  • Step S30, group each of the screened other car accident scene photos with the selected car accident scene photo, and use a preset type algorithm to calculate the homography matrix corresponding to the car accident scene photos in each group;
  • the screened other car accident scene photos are each grouped separately with the selected car accident scene photo; that is, each of the screened other photos is combined with the selected photo to form a group, and the homography matrix corresponding to the car accident scene photos in each group is calculated with the preset type algorithm.
  • the preset type algorithm preferably adopts a RANSAC (random sample consensus) algorithm.
  • Step S40, calculate the photo matching confidence corresponding to each group based on the homography matrix corresponding to each group, and take the other car accident scene photo in the group with the highest confidence as the matching photo of the selected car accident scene photo;
  • the other car accident scene photo in the group with the highest confidence and the selected car accident scene photo form a matching photo pair;
  • the photo matching confidence corresponding to each group is calculated as follows: after the homography matrix of a group is computed, the overlap area of the two photos in the group (i.e., the selected car accident scene photo and one of its associated other photos)
  • can be found through the homography transformation.
  • the homography matrix is estimated by the RANSAC algorithm.
  • the RANSAC algorithm returns a set of inlier points (i.e., matching points), and the proportion of these inlier points
  • within the overlap area of the two photos is the matching confidence of the two photos.
  • the other car accident scene photo in the group with the highest confidence is taken as the matching photo of the selected car accident scene photo, and the selected photo and its matching photo form a matching photo pair;
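As a concrete illustration of the homography estimation and the confidence score, the sketch below implements a plain RANSAC loop over a 4-point DLT fit in NumPy and scores a photo pair by its inlier ratio. This is a simplified stand-in under stated assumptions: the correspondences are synthetic, and the score is the inlier fraction over all correspondences rather than the percentage within the computed overlap area described above.

```python
import numpy as np

def homography_dlt(src, dst):
    """Fit a 3x3 homography to >= 4 point pairs with the direct
    linear transform (SVD null-space solution)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=3.0, seed=0):
    """Repeatedly fit a homography to 4 random correspondences and keep
    the model with the most inliers (reprojection error < thresh)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_H, best_in = None, np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_in.sum():
            best_H, best_in = H, inliers
    return best_H, best_in

# Synthetic pair: a pure translation homography plus 5 gross outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (30, 2))
H_true = np.array([[1.0, 0, 10], [0, 1.0, 5], [0, 0, 1.0]])
dst = project(H_true, src)
dst[:5] += rng.uniform(30, 60, (5, 2))     # corrupt 5 correspondences
H, inliers = ransac_homography(src, dst)
confidence = inliers.sum() / len(src)       # simplified matching confidence
```

A production pipeline would use a tuned robust estimator (e.g., OpenCV's `cv2.findHomography` with the RANSAC method) rather than this teaching loop.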
  • Step S50, determine, according to the feature point sets of the car accident scene photos in each matching photo pair, the shooting order and the stitching positions of the photos in each matching photo pair;
  • according to the feature point sets of the car accident scene photos in each matching photo pair, the system determines the stitching positions and the shooting order of the photos in each pair. For example, the shooting order and the stitching positions of the photos in each matching photo pair can be determined from the homography matrix of that pair.
  • Step S60, stitch the car accident scene photos in the matching photo pairs according to the determined shooting order and stitching positions.
  • the car accident scene photos in each matching photo pair are stitched to obtain a panoramic image of the car accident scene.
  • in summary, after receiving the photo file package of a car accident scene in an auto insurance claim scenario, the system extracts the preset type feature points of each car accident scene photo and determines the feature point sets of each photo; it then selects the car accident scene photos one by one for processing, and, based on the feature point sets of the photos, uses the preset screening rules to screen out the other car accident scene photos associated with the selected photo, so that the other photos associated with each car accident scene photo are found.
  • each selected car accident scene photo is grouped one by one with the other car accident scene photos associated with it, and the homography matrix corresponding to each group is calculated; then, based on the homography matrix corresponding to each group,
  • the photo matching confidence corresponding to each group is calculated, and the other car accident scene photo in the group with the highest confidence forms a matching photo pair with the selected car accident scene photo, so that the matching photo of each selected car accident scene photo is obtained.
  • in this way, after the system receives the photo file package of a car accident scene, it automatically completes the stitching of all the car accident scene photos in the package and quickly obtains a panoramic image of the accident scene; compared with the manual stitching of photos in the prior art, this greatly improves efficiency, ensures the real-time processing of the auto insurance claims business, and reduces labor cost.
  • the preset type feature point of this embodiment is a RootSIFT feature point.
  • for a car accident scene photo, the step of extracting its preset type feature points includes:
  • computing the SIFT descriptor (the orientation parameter vector) and transforming it into a RootSIFT descriptor using a preset calculation formula.
  • the RootSIFT descriptor is likewise a multi-dimensional vector.
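The patent does not spell out the "preset calculation formula". The transform commonly known as RootSIFT (Arandjelović and Zisserman) L1-normalizes each SIFT vector and takes the element-wise square root, so that Euclidean distance between RootSIFT vectors corresponds to the Hellinger kernel on the original SIFT histograms. A sketch of that assumed formula:

```python
import numpy as np

def root_sift(sift_descriptors, eps=1e-7):
    """Assumed RootSIFT formula: L1-normalize each SIFT descriptor,
    then take the element-wise square root."""
    d = np.asarray(sift_descriptors, dtype=float)
    d = d / (np.abs(d).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(d)

# Toy 3-D stand-ins for 128-D SIFT vectors.
sift = np.array([[4.0, 0.0, 12.0], [1.0, 1.0, 2.0]])
rs = root_sift(sift)
```

A side effect of the transform is that every RootSIFT vector has approximately unit L2 norm, since the squares of its entries sum to the L1-normalized total of 1.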
  • the predetermined screening rule is:
  • Compare each of the other car accident scene photos with the selected car accident scene photo: compare the feature point sets of the other photo with the feature point sets of the selected photo, find the feature point sets identical to those of the selected photo (i.e., the matching feature point sets), and count the number of matching feature point sets of each other photo.
  • the second preset number is a threshold preset by the system for determining whether another car accident scene photo is associated with the selected car accident scene photo; for example, the second preset number is four. When the number of matching feature point sets corresponding to another photo is greater than the second preset number, the system determines that the photo meets the associated-photo requirement of the selected photo, and determines that it is an associated photo of the selected car accident scene photo.
  • otherwise, the system determines that the other car accident scene photo does not meet the associated-photo requirement of the selected photo, and the other photo is determined to be unrelated to the selected car accident scene photo.
  • the foregoing is only the predetermined screening rule of this embodiment; in other embodiments, other screening rules may be adopted, for example, taking the other car accident scene photos whose matching feature point set counts rank within a preset top number as the associated photos of the selected car accident scene photo, and so on.
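The threshold rule can be expressed directly in code. The sketch below is illustrative only: it treats each photo's feature point sets as sets of integer feature IDs so that "identical feature point sets" reduces to set equality, and the names `associated_photos`, `photo_b`, and `photo_c` are hypothetical.

```python
def associated_photos(selected_sets, other_photos, second_preset_number=4):
    """Return the names of the other photos whose number of feature point
    sets identical to the selected photo's exceeds the threshold."""
    selected = set(map(frozenset, selected_sets))
    result = []
    for name, sets in other_photos.items():
        matches = sum(1 for s in map(frozenset, sets) if s in selected)
        if matches > second_preset_number:   # strictly greater, per the rule
            result.append(name)
    return result

selected_sets = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {6, 7}]
others = {
    "photo_b": [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {8, 9}],  # 5 matches
    "photo_c": [{1, 2}, {8, 9}],                                   # 1 match
}
```

With the default threshold of four, only `photo_b` (five matching sets) qualifies as an associated photo.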
  • FIG. 2 is a schematic flowchart of a second embodiment of the method for displaying a panoramic image of a car accident scene.
  • this embodiment is based on the first embodiment.
  • the step S60 includes:
  • Step S61, for each car accident scene photo in each matching photo pair, calculate preset type picture adjustment parameters using a first preset algorithm;
  • the first preset algorithm may be a bundle adjustment algorithm or another similar algorithm. The preset type picture adjustment parameters of each car accident scene photo in each matching pair are determined by the first preset algorithm; the preset type picture adjustment parameters include a preset type rotation matrix (for example, a rotation matrix composed of three Euler angles) and the camera focal length.
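The two adjustment parameters named above, a rotation matrix built from three Euler angles and the camera focal length, can be assembled as follows. The Z·Y·X composition order and the pinhole intrinsic matrix are assumptions for illustration; bundle adjustment itself (the joint optimization of these parameters over all photos) is omitted.

```python
import numpy as np

def rotation_from_euler(rx, ry, rz):
    """Rotation matrix from three Euler angles in radians,
    composed as Rz @ Ry @ Rx (one of several conventions)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def intrinsics(focal, cx=0.0, cy=0.0):
    """Pinhole intrinsic matrix for a given focal length in pixels."""
    return np.array([[focal, 0, cx], [0, focal, cy], [0, 0, 1.0]])

R = rotation_from_euler(0.1, -0.2, 0.3)
K = intrinsics(800.0)
```

For a rotation-only camera model, the homography between photos i and j factors as K_j R_j R_iᵀ K_i⁻¹, which is the structure bundle adjustment exploits when refining the per-photo rotations and focal lengths.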
  • Step S62, adjust the image color differences of the car accident scene photos in each matching photo pair using a second preset algorithm;
  • the second preset algorithm may be a multi-band blending method.
  • the multi-band blending method first finds the overlap region and then constructs an image Laplacian pyramid; the pyramid is generated by downsampling.
  • downsampling means sampling a picture with many pixels at equal intervals to generate a new, smaller picture.
  • the multi-band blending method operates not only on the image itself but also on the images in its pyramid; finally, the images in these pyramids are expanded and superimposed to generate the merged image.
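The pyramid construction, per-band blending, and final expand-and-superimpose steps can be sketched end to end. This toy NumPy version follows the description literally: downsampling keeps every other pixel (equally spaced samples) and expansion is nearest-neighbour repetition, whereas production implementations low-pass filter first (e.g., OpenCV's `pyrDown`/`pyrUp`); the images here are small uniform grayscale arrays.

```python
import numpy as np

def down(img):
    # Equally spaced sampling: keep every other pixel.
    return img[::2, ::2]

def up(img, shape):
    # Nearest-neighbour expansion back to a target shape.
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))   # band-pass residual
        cur = small
    pyr.append(cur)                              # coarsest level
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(down(pyr[-1]))
    return pyr

def multiband_blend(a, b, mask, levels=3):
    """Blend a and b band by band (mask 1 -> a, 0 -> b),
    then expand and superimpose the blended pyramid."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = gaussian_pyramid(mask, levels)
    blended = [m * x + (1 - m) * y for x, y, m in zip(la, lb, gm)]
    out = blended[-1]
    for lap in reversed(blended[:-1]):
        out = up(out, lap.shape) + lap
    return out

a = np.full((8, 8), 100.0)              # "left" photo, uniform gray
b = np.full((8, 8), 200.0)              # "right" photo, uniform gray
mask = np.zeros((8, 8))
mask[:, :4] = 1.0                       # take photo a on the left half
result = multiband_blend(a, b, mask)
```

Blending each frequency band with a progressively coarser mask is what hides the seam: low frequencies transition over a wide area while fine detail transitions sharply.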
  • Step S63, stitch the color-adjusted car accident scene photos in the matching photo pairs according to the corresponding shooting order, stitching positions, and picture adjustment parameters.
  • after color difference adjustment, all the car accident scene photos are stitched according to the shooting order of the photos, the stitching positions of the photos, and the picture adjustment parameters corresponding to the photos;
  • a panoramic image of the car accident scene is obtained after stitching.
  • the present application also proposes a panoramic image display system for a car accident scene.
  • FIG. 3 is a schematic diagram of an operating environment of a preferred embodiment of the vehicle accident scene panoramic image display system 10 of the present application.
  • the car accident scene panoramic image display system 10 is installed and operated in the electronic device 1.
  • the electronic device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a server.
  • the electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
  • Figure 3 shows only the electronic device 1 with components 11-13, but it should be understood that not all illustrated components may be implemented, and more or fewer components may be implemented instead.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a hard disk or memory of the electronic device 1.
  • the memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in hard disk equipped on the electronic device 1, a smart media card (SMC), a secure digital (SD) card, a flash card, and the like.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 is used to store application software and various types of data installed in the electronic device 1, such as program codes of the car accident scene panoramic image display system 10.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12 may in some embodiments be a Central Processing Unit (CPU), microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the car accident scene panoramic image display system 10.
  • the display 13 may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like in some embodiments.
  • the display 13 is used to display information processed in the electronic device 1 and to display a visualized user interface, such as a business customization interface.
  • the components 11-13 of the electronic device 1 communicate with one another via a system bus.
  • FIG. 4 is a program module diagram of an embodiment of the vehicle accident scene panoramic image display system 10 of the present application.
  • the car accident scene panoramic image display system 10 can be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to complete the present application.
  • the car accident scene panoramic image display system 10 can be divided into an extraction module 101, a screening module 102, a first calculation module 103, a second calculation module 104, a determination module 105, and a splicing module 106.
  • a module herein refers to a series of computer program instruction segments capable of performing a specific function, and is more suitable than a whole program for describing the execution process of the car accident scene panoramic image display system 10 in the electronic device 1.
  • the extraction module 101 is configured to: after receiving a car accident scene photo file package of an auto insurance claim scenario, extract the preset type feature points of each car accident scene photo, and find for each feature point the first preset number of nearest neighbor points, where each feature point together with its first preset number of nearest neighbor points forms a feature point set;
  • a car accident scene photo file package of a car insurance claim scene includes a complete set of photos of a car accident scene taken by a panoramic camera.
  • after receiving a car accident scene photo file package, the system first extracts the preset type feature points (such as RootSIFT feature points) in all the car accident scene photos, and then finds, for each feature point of each photo, the neighbor points nearest to it.
  • a first preset number (for example, four) of neighbor points is found for each feature point,
  • where nearness is measured by a feature point distance (for example, the Euclidean distance);
  • each feature point and its first preset number of nearest neighbor points form a feature point set, so as to determine all the feature point sets of each car accident scene photo.
  • specifically, the distances (for example, Euclidean distances) between all the feature points of each car accident scene photo can be calculated, thereby determining the first preset number of nearest feature points of each feature point.
  • the screening module 102 is configured to select the car accident scene photos one by one, and, after selecting one car accident scene photo, screen out, based on the feature point sets corresponding to all the car accident scene photos and according to the predetermined screening rules, the other car accident scene photos associated with the selected photo;
  • the system selects the car accident scene photos one by one for processing; after selecting one car accident scene photo, based on the feature point sets corresponding to the car accident scene photos, it screens out, according to the predetermined screening rules,
  • the other car accident scene photos associated with the selected car accident scene photo, where the other car accident scene photos are the remaining car accident scene photos excluding the selected one.
  • the first calculation module 103 is configured to group each of the screened other car accident scene photos with the selected car accident scene photo, and use a preset type algorithm to calculate the homography matrix corresponding to the car accident scene photos in each group;
  • the screened other car accident scene photos are each grouped separately with the selected car accident scene photo; that is, each of the screened other photos is combined with the selected photo to form a group, and the homography matrix corresponding to the car accident scene photos in each group is calculated with the preset type algorithm.
  • the preset type algorithm preferably adopts a RANSAC (random sample consensus) algorithm.
  • the second calculation module 104 is configured to calculate the photo matching confidence corresponding to each group based on the homography matrix corresponding to each group, and take the other car accident scene photo in the group with the highest confidence as the matching photo of the selected car accident scene photo; the other car accident scene photo in the highest-confidence group and the selected car accident scene photo form a matching photo pair;
  • the photo matching confidence corresponding to each group is calculated as follows: after the homography matrix of a group is computed, the overlap area of the two photos in the group (i.e., the selected car accident scene photo and one of its associated other photos)
  • can be found through the homography transformation.
  • the homography matrix is estimated by the RANSAC algorithm.
  • the RANSAC algorithm returns a set of inlier points (i.e., matching points), and the proportion of these inlier points
  • within the overlap area of the two photos is the matching confidence of the two photos.
  • the other car accident scene photo in the group with the highest confidence is taken as the matching photo of the selected car accident scene photo, and the selected photo and its matching photo form a matching photo pair;
  • the determining module 105 is configured to determine, according to the feature point sets of the car accident scene photos in each matching photo pair, the shooting order and the stitching positions of the photos in each matching photo pair;
  • after the processing by the screening module 102, the first calculation module 103, and the second calculation module 104 has found the matching photos of all the car accident scene photos (i.e., the matching photo pairs of all the car accident scene photos), the system,
  • according to the feature point sets of the car accident scene photos in each matching photo pair, determines the stitching positions and the shooting order of the photos in each pair.
  • for example, the shooting order and the stitching positions of the photos in each matching photo pair can be determined from the homography matrix of that pair.
  • the stitching module 106 is configured to stitch the car accident scene photos in each matching photo pair according to the determined shooting order and stitching positions.
  • the car accident scene photos in each matching photo pair are stitched to obtain a panoramic image of the car accident scene.
  • in summary, after receiving the photo file package of a car accident scene in an auto insurance claim scenario, the system extracts the preset type feature points of each car accident scene photo and determines the feature point sets of each photo; it then selects the car accident scene photos one by one for processing, and, based on the feature point sets of the photos, uses the preset screening rules to screen out the other car accident scene photos associated with the selected photo, so that the other photos associated with each car accident scene photo are found.
  • each selected car accident scene photo is grouped one by one with the other car accident scene photos associated with it, and the homography matrix corresponding to each group is calculated; then, based on the homography matrix corresponding to each group,
  • the photo matching confidence corresponding to each group is calculated, and the other car accident scene photo in the group with the highest confidence forms a matching photo pair with the selected car accident scene photo, so that the matching photo of each selected car accident scene photo is obtained.
  • in this way, after the system receives the photo file package of a car accident scene, it automatically completes the stitching of all the car accident scene photos in the package and quickly obtains a panoramic image of the accident scene; compared with the manual stitching of photos in the prior art, this greatly improves efficiency, ensures the real-time processing of the auto insurance claims business, and reduces labor cost.
  • the preset type feature point of the embodiment is a RootSIFT feature point.
  • the preset type feature points of a car accident scene photo are extracted as follows:
  • the SIFT descriptor (the orientation parameter vector) is computed and transformed into a RootSIFT descriptor using a preset calculation formula.
  • the RootSIFT descriptor is likewise a multi-dimensional vector.
  • the predetermined screening rule is:
  • Compare each of the other car accident scene photos with the selected car accident scene photo: compare the feature point sets of the other photo with the feature point sets of the selected photo, find the feature point sets identical to those of the selected photo (i.e., the matching feature point sets), and count the number of matching feature point sets of each other photo.
  • the second preset number is a threshold preset by the system for determining whether another car accident scene photo is associated with the selected car accident scene photo; for example, the second preset number is four. When the number of matching feature point sets corresponding to another photo is greater than the second preset number, the system determines that the photo meets the associated-photo requirement of the selected photo, and determines that it is an associated photo of the selected car accident scene photo.
  • otherwise, the system determines that the other car accident scene photo does not meet the associated-photo requirement of the selected photo, and the other photo is determined to be unrelated to the selected car accident scene photo.
  • the foregoing is only the predetermined screening rule of this embodiment; in other embodiments, other screening rules may be adopted, for example, taking the other car accident scene photos whose matching feature point set counts rank within a preset top number as the associated photos of the selected car accident scene photo, and so on.
  • FIG. 5 is a program module diagram of a second embodiment of a panoramic image display system for a car accident scene of the present application.
  • the splicing module 106 includes:
  • The parameter-determining sub-module 1061 is configured to compute preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair using a first preset algorithm.
  • The first preset algorithm may be a bundle adjustment algorithm or a similar algorithm. The preset-type picture adjustment parameters of each photo in each matching pair are computed with the first preset algorithm; these parameters include a preset-type rotation matrix (for example, a rotation matrix of three Euler angles) and the camera focal length.
  • The adjusting sub-module 1062 is configured to adjust the image chromatic aberration of each accident-scene photo in each matching photo pair using a second preset algorithm.
  • The second preset algorithm may be multi-band blending.
  • Multi-band blending includes: first finding the overlap region, then building the image's Laplacian pyramid; the pyramid is generated by downsampling.
  • Downsampling means sampling a picture with many pixels at equal intervals to generate a new, smaller picture.
  • Multi-band blending operates not only on the image itself but also on the images in the pyramid; finally, the pyramid levels are expanded and superimposed to generate a blended image.
  • The splicing sub-module 1063 is configured to stitch the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  • All the adjusted accident-scene photos are stitched according to each photo's shooting order, splicing parts, and corresponding picture adjustment parameters; after stitching, the panoramic image of the accident scene is obtained.
  • The present application further provides a computer-readable storage medium storing a car-accident-scene panoramic image display system executable by at least one processor, to cause the at least one processor to perform the car-accident-scene panoramic image display method of any of the above embodiments.

Abstract

The present application discloses an electronic device, a method for displaying a panoramic image of a car accident scene, and a storage medium. The method includes: after receiving a file package of car-accident-scene photos, extracting preset-type feature points from each photo and finding, for each feature point, a preset number of its nearest neighboring points to form a feature-point set; selecting the photos one by one and screening out the other photos associated with each selected photo; pairing each screened-out photo with the selected photo and computing the homography matrix of each pair; then computing the photo-matching confidence of each pair, the other photo in the pair with the highest confidence forming a matching photo pair with the selected photo; and determining, from the feature-point sets of the photos, the shooting order and splicing parts of the photos in each matching photo pair and stitching them together. The technical solution of the present application improves the stitching efficiency of car-accident-scene photos and reduces cost.

Description

Electronic device, method for displaying a panoramic image of a car accident scene, and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on October 27, 2017, with application number 201711025267.6 and invention title "Electronic device, method for displaying a panoramic image of a car accident scene, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular to an electronic device, a method for displaying a panoramic image of a car accident scene, and a storage medium.
Background
At present, claims for car accidents are usually settled on the basis of photos of the accident scene, which are indispensable evidence. The existing way of obtaining such photos is for the user or a car-insurance claims adjuster to photograph the accident scene, together with several photos of the surroundings of the accident vehicle. As evidence, these photos are clear, persuasive, and irreplaceable.
However, in actual car-insurance claims processing, many disputed claims arise. Such disputes usually stem from descriptions of the accident scene that are not intuitive enough, so that the real scene cannot be restored objectively and accurately even with the accident photos. In recent years, the rapid development of VR (virtual reality) technology has provided important technical support for reconstructing accident scenes.
At present, schemes that apply VR (virtual reality) technology to car-insurance claims usually stitch the accident-scene photos manually: a photographer shoots the real scene with an expensive panoramic camera, a post-processing team stitches the captured images by hand, and the result is finally rendered with Photoshop. Undeniably, this achieves high accuracy, but the cost is too high and the real-time performance too poor to meet real car-insurance business needs.
Summary
The main purpose of the present application is to provide an electronic device, a method for displaying a panoramic image of a car accident scene, and a storage medium, aiming to improve the stitching efficiency of accident-scene photos in car-insurance claim scenarios and to reduce cost.
To achieve the above purpose, the electronic device proposed by the present application includes a memory and a processor. The memory stores a car-accident-scene panoramic image display system runnable on the processor, and the system implements the following steps when executed by the processor:
after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extracting the preset-type feature points of each accident-scene photo and finding, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
selecting the accident-scene photos one by one and, after a photo is selected, screening out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
pairing each screened-out photo with the selected photo to form groups of two, and computing the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
based on the homography matrix of each group, computing the photo-matching confidence of each group, and taking the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
determining, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
The present application further proposes a method for displaying a panoramic image of a car accident scene, the method including the steps of:
after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extracting the preset-type feature points of each accident-scene photo and finding, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
selecting the accident-scene photos one by one and, after a photo is selected, screening out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
pairing each screened-out photo with the selected photo to form groups of two, and computing the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
based on the homography matrix of each group, computing the photo-matching confidence of each group, and taking the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
determining, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
The present application further proposes a computer-readable storage medium storing a car-accident-scene panoramic image display system executable by at least one processor, to cause the at least one processor to perform the following steps:
after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extracting the preset-type feature points of each accident-scene photo and finding, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
selecting the accident-scene photos one by one and, after a photo is selected, screening out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
pairing each screened-out photo with the selected photo to form groups of two, and computing the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
based on the homography matrix of each group, computing the photo-matching confidence of each group, and taking the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
determining, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
In the technical solution of the present application, after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, the system extracts the preset-type feature points of each photo and determines each photo's feature-point sets; it then selects the photos one by one for processing, using the preset screening rule on the feature-point sets of the photos to screen out the other photos associated with each selected photo, thereby finding the associated photos of every photo. For each selected photo, the selected photo is paired one by one with each of its associated photos, the homography matrix of each group is computed, the photo-matching confidence of each group is then computed from its homography matrix, and the other photo in the group with the highest confidence forms a matching photo pair with the selected photo, thereby obtaining the matching photo pair of each selected photo. Then, from the feature-point sets of the photos in each matching photo pair, the shooting order and splicing parts of those photos are determined, and the photos are stitched accordingly, yielding a panoramic image of the accident scene for display. In this solution, after receiving the photo file package of a car-insurance claim scenario, the system fully automatically stitches all the accident-scene photos in the package and quickly produces the panoramic image of the accident scene; compared with the manual stitching of the prior art, efficiency is greatly improved, real-time processing of car-insurance claims is ensured, and labor cost is reduced.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from the structures shown in them without creative effort.
FIG. 1 is a schematic flowchart of a first embodiment of the method for displaying a panoramic image of a car accident scene of the present application;
FIG. 2 is a schematic flowchart of a second embodiment of the method for displaying a panoramic image of a car accident scene of the present application;
FIG. 3 is a schematic diagram of the operating environment of an embodiment of the car-accident-scene panoramic image display system of the present application;
FIG. 4 is a program module diagram of a first embodiment of the car-accident-scene panoramic image display system of the present application;
FIG. 5 is a program module diagram of a second embodiment of the car-accident-scene panoramic image display system of the present application.
The realization of the purpose, functional features, and advantages of the present application will be further explained with reference to the embodiments and the accompanying drawings.
Detailed Description
The principles and features of the present application are described below with reference to the drawings; the examples given are only used to explain the present application and are not intended to limit its scope.
As shown in FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the method for displaying a panoramic image of a car accident scene of the present application.
In this embodiment, the method includes:
Step S10: after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extract the preset-type feature points of each accident-scene photo and find, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set.
In this embodiment, the photo file package of a car-insurance claim scenario contains a complete set of photos of the accident scene taken with a panoramic camera. After receiving such a package, the system first extracts the preset-type feature points (for example, RootSIFT feature points) of all the accident-scene photos; then, for each feature point of each photo, it finds the first preset number (for example, four) of neighboring points closest to that feature point in distance (for example, Euclidean distance); the feature point and its first preset number of nearest neighbors form one feature-point set, thereby determining all the feature-point sets of every photo. In this embodiment, after the preset-type feature points of each photo are extracted, the pairwise distances (for example, Euclidean distances) between all of that photo's feature points can be computed to determine each feature point's first preset number of nearest neighbors.
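The neighbor-set construction of step S10 can be sketched in NumPy as follows (a minimal illustration; the 8-dimensional random descriptors and k = 4 mirror the example values in this embodiment, not the application's actual data):

```python
import numpy as np

def feature_point_sets(descriptors, k=4):
    """For each feature point, find its k nearest neighbors (Euclidean
    distance) among the photo's other points; row i of the result is
    [i, neighbor_1, ..., neighbor_k] -- one feature-point set."""
    diffs = descriptors[:, None, :] - descriptors[None, :, :]
    d2 = np.sum(diffs ** 2, axis=-1)      # squared pairwise distances
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    nn = np.argsort(d2, axis=1)[:, :k]    # k closest neighbors per point
    return np.hstack([np.arange(len(descriptors))[:, None], nn])

rng = np.random.default_rng(0)
sets_ = feature_point_sets(rng.normal(size=(10, 8)), k=4)
print(sets_.shape)  # (10, 5): each point plus its 4 nearest neighbors
```

In practice the pairwise-distance matrix grows quadratically with the number of feature points, so a k-d tree or approximate nearest-neighbor index would replace the brute-force distance computation.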
Step S20: select the accident-scene photos one by one; after a photo is selected, screen out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo.
After the feature-point sets of all the accident-scene photos are obtained, the system selects the photos one by one for processing. After a photo is selected, the system screens out, based on each photo's feature-point sets and the predetermined screening rule, the other accident-scene photos associated with the selected photo; the "other" photos are the remaining accident-scene photos excluding the selected one.
Step S30: pair each screened-out photo with the selected photo to form groups of two, and compute the homography matrix of the accident-scene photos in each group using a preset-type algorithm.
After the other accident-scene photos associated with the selected photo are obtained, each screened-out photo is combined once with the selected photo to form a group; the homography matrix of the accident-scene photos in each group is then computed with the preset-type algorithm. In this embodiment, the preset-type algorithm is preferably the RANSAC (random sample consensus) algorithm.
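The per-group homography can be estimated from matched points with the direct linear transform (DLT); RANSAC, as preferred in this embodiment, repeatedly fits such a model to random minimal samples and keeps the fit with the most inliers. A minimal NumPy sketch of the DLT fit (illustrative, not the application's exact implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (N >= 4 point pairs)
    by the direct linear transform: stack the cross-product constraints and
    take the null vector of the design matrix via SVD. RANSAC would call
    this on random 4-point samples and keep the most-supported estimate."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]           # fix the projective scale

# Synthetic check: points mapped through a known homography are recovered.
H_true = np.array([[1.0, 0.1, 5.0], [0.0, 0.9, -3.0], [0.001, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], float)
pts = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))  # True
```

A production pipeline would typically delegate both the DLT fit and the RANSAC loop to a library routine such as OpenCV's `cv2.findHomography` with the RANSAC method flag.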
Step S40: based on the homography matrix of each group, compute the photo-matching confidence of each group; take the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair.
In this embodiment, the photo-matching confidence of each group is computed as follows: once a group's homography matrix has been computed, the overlap region of the two photos in the group (the selected photo and one of its associated photos) can be found through the homography transform. The homography matrix is estimated with the RANSAC algorithm, whose procedure returns a set of inliers (matching points); the percentage of these inliers within the overlap region of the two photos is the matching confidence of the pair. After the confidence of each group is obtained, the other photo in the group with the highest confidence is taken as the matching photo of the selected photo, and the selected photo together with its matching photo forms one matching photo pair. In this embodiment, there may be only one highest-confidence group, or several groups may share the same highest confidence.
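The confidence computation described above (the share of matches that are RANSAC inliers under the group's homography) can be sketched as follows, assuming the homography and the matched point coordinates are given; the 3-pixel reprojection threshold is an illustrative assumption:

```python
import numpy as np

def match_confidence(H, src, dst, thresh=3.0):
    """Confidence of a photo pair: the fraction of matched points whose
    reprojection through homography H lands within `thresh` pixels of the
    observed match -- i.e., the RANSAC inlier ratio."""
    pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:]              # homogeneous -> pixel coords
    err = np.linalg.norm(proj - dst, axis=1)    # reprojection error per match
    return float(np.mean(err < thresh))

# Identity homography, three good matches and one gross outlier.
H = np.eye(3)
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
dst = src + np.array([[0.5, 0], [0, 0.5], [0, 0], [50, 50]])
print(match_confidence(H, src, dst))  # 0.75
```

Selecting, for each photo, the partner group with the highest such ratio then yields the matching photo pairs of step S40.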
Step S50: from the feature-point sets of the accident-scene photos in each matching photo pair, determine the shooting order and splicing parts of those photos.
After the processing of steps S20, S30, and S40 has found the matching photo of every accident-scene photo (i.e., obtained all matching photo pairs), the system determines, from the feature-point sets of the photos in each matching photo pair, the splicing parts and shooting order of the photos in each pair. For example, the shooting order and splicing parts of the photos in each matching photo pair can be determined through the pair's homography matrix.
Step S60: stitch the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
After the shooting order and splicing parts of the photos of each matching photo pair are determined, the photos in each matching photo pair are stitched accordingly, yielding the panoramic picture of the accident scene.
In the technical solution of this embodiment, after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, the system extracts the preset-type feature points of each photo and determines each photo's feature-point sets; it then selects the photos one by one for processing, using the preset screening rule on the feature-point sets of the photos to screen out the other photos associated with each selected photo, thereby finding the associated photos of every photo. For each selected photo, the selected photo is paired one by one with each of its associated photos, the homography matrix of each group is computed, the photo-matching confidence of each group is then computed from its homography matrix, and the other photo in the group with the highest confidence forms a matching photo pair with the selected photo, thereby obtaining the matching photo pair of each selected photo. Then, from the feature-point sets of the photos in each matching photo pair, the shooting order and splicing parts of those photos are determined, and the photos are stitched accordingly, yielding a panoramic image of the accident scene for display. In this solution, after receiving the photo file package of a car-insurance claim scenario, the system fully automatically stitches all the accident-scene photos in the package and quickly produces the panoramic image of the accident scene; compared with the manual stitching of the prior art, efficiency is greatly improved, real-time processing of car-insurance claims is ensured, and labor cost is reduced.
Preferably, the preset-type feature points of this embodiment are RootSIFT feature points. For one accident-scene photo, the step of extracting its preset-type feature points includes:
(1) constructing the scale space of the photo through Gaussian filtering and difference of Gaussians, the scale space being the DoG (Difference of Gaussian) space;
(2) detecting the extreme points of the scale space, the detected extreme points becoming key points, which are also candidate feature points;
(3) fitting a three-dimensional quadratic function to determine the position and scale of each key point;
(4) using the gradient-direction distribution of the pixels in each key point's neighborhood to assign direction parameters to each key point, generating a multi-dimensional (for example, 128-dimensional) direction parameter vector and generating a descriptor;
(5) denoting the direction parameter vector SIFT and transforming SIFT into RootSIFT with a preset formula; the RootSIFT is likewise a multi-dimensional vector, and the preset formula is RootSIFT = sqrt(SIFT / sum(SIFT)), where sqrt denotes the element-wise square root.
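Written out, the preset formula L1-normalizes the SIFT descriptor and then takes an element-wise square root; a minimal sketch:

```python
import numpy as np

def rootsift(sift):
    """RootSIFT = sqrt(SIFT / sum(SIFT)): L1-normalize the (non-negative)
    SIFT descriptor, then take the element-wise square root. Euclidean
    distance between RootSIFT vectors then behaves like the Hellinger
    distance between the original histograms."""
    sift = np.asarray(sift, float)
    return np.sqrt(sift / np.sum(sift))

desc = np.array([4.0, 1.0, 0.0, 4.0, 16.0])   # toy 5-dim "SIFT" descriptor
r = rootsift(desc)
print(np.isclose(np.linalg.norm(r), 1.0))  # True: RootSIFT is L2-normalized
```

A useful side effect, visible in the check above, is that the L1 normalization followed by the square root leaves every RootSIFT vector with unit L2 norm.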
Further, in this embodiment, the predetermined screening rule is:
determine, for each of the other accident-scene photos, the number of matching feature-point sets, i.e., feature-point sets identical to feature-point sets of the selected photo;
each of the other accident-scene photos is compared with the selected photo as follows: the feature-point sets of the other photo are compared against the feature-point sets of the selected photo to find the identical sets (the matching feature-point sets), and the number of matching feature-point sets of each other photo is counted;
if the number of matching feature-point sets of an other accident-scene photo is greater than a second preset number, determine that the photo is associated with the selected photo;
the more matching feature-point sets an other photo has, the more regions it shares with the selected photo. The second preset number is a threshold preset by the system for deciding whether an other photo is associated with the selected photo; for example, the second preset number is four. When the number of matching feature-point sets of an other photo is greater than the second preset number, the system determines that the photo meets the associated-photo requirement of the selected photo and marks it as an associated photo of the selected photo;
if the number of matching feature-point sets of an other accident-scene photo is less than or equal to the second preset number, determine that the photo is not associated with the selected photo;
when the number of matching feature-point sets of an other photo is less than or equal to the second preset number, the system determines that the photo does not meet the associated-photo requirement and marks it as not associated with the selected photo.
Of course, the above is only the preferred predetermined screening rule of this embodiment; in other embodiments, other screening rules may be adopted, for example, taking the other accident-scene photos ranked within a preset top number by matching-feature-set count as the associated photos of the selected photo, and so on.
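The screening rule above can be sketched as follows; representing each feature-point set by a hashable key is a hypothetical simplification (in practice, set equality would be decided by descriptor matching rather than exact identity):

```python
def associated_photos(selected_sets, other_photos_sets, second_preset=4):
    """Predetermined screening rule: another photo is associated with the
    selected photo when the count of its feature-point sets that also occur
    among the selected photo's sets exceeds `second_preset`."""
    selected = {frozenset(s) for s in selected_sets}
    associated = []
    for name, sets_ in other_photos_sets.items():
        matches = sum(1 for s in sets_ if frozenset(s) in selected)
        if matches > second_preset:          # strictly greater, per the rule
            associated.append(name)
    return associated

# Toy example: photo "A" shares 5 feature-point sets with the selected photo,
# photo "B" only 3; with the example threshold of 4, only "A" is associated.
sel = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15), (16, 17, 18)]
others = {"A": sel[:5] + [(99, 100)], "B": sel[:3] + [(77,)]}
print(associated_photos(sel, others))  # ['A']
```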
As shown in FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the method for displaying a panoramic image of a car accident scene of the present application. This embodiment is based on the first embodiment; in the method of this embodiment, step S60 includes:
Step S61: for each accident-scene photo in each matching photo pair, compute preset-type picture adjustment parameters with a first preset algorithm.
The first preset algorithm may be a bundle adjustment algorithm or a similar algorithm. The preset-type picture adjustment parameters of each accident-scene photo in each matching pair are computed with the first preset algorithm; these parameters include a preset-type rotation matrix (for example, a rotation matrix of three Euler angles) and the camera focal length.
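The "rotation matrix of three Euler angles" named among the picture adjustment parameters can be formed as below (a minimal sketch; the x-y-z rotation convention is an assumption for illustration, as the embodiment does not fix one):

```python
import numpy as np

def euler_rotation(alpha, beta, gamma):
    """Rotation matrix from three Euler angles, composed as
    R = Rz(gamma) @ Ry(beta) @ Rx(alpha)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R = euler_rotation(0.1, -0.2, 0.3)
print(np.allclose(R @ R.T, np.eye(3)))  # True: rotations are orthonormal
```

Bundle adjustment would then refine these three angles and the focal length per photo by minimizing the total reprojection error over all matching pairs.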
Step S62: adjust the image chromatic aberration of each accident-scene photo in each matching photo pair with a second preset algorithm.
The second preset algorithm may be multi-band blending, which includes: first finding the overlap region, then building the image's Laplacian pyramid, the pyramid being generated by downsampling. Downsampling means sampling a picture with many pixels at equal intervals to generate a new picture. Multi-band blending operates not only on the image itself but also on the images in the pyramid; finally, the pyramid levels are expanded and superimposed to generate a well-blended picture.
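The pyramid construction behind multi-band blending can be sketched as follows (a simplified NumPy version: nearest-neighbor resampling stands in for the Gaussian filtering used in practice, so this illustrates the build/expand/superimpose structure rather than production-quality blending):

```python
import numpy as np

def downsample(img):
    """Equal-interval sampling: keep every second pixel in each direction."""
    return img[::2, ::2]

def upsample_to(img, shape):
    """Nearest-neighbor expansion back to a target shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Each level stores the detail lost by downsampling (a frequency band);
    the last level stores the coarsest image itself."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample_to(small, cur.shape))  # band-pass detail
        cur = small
    pyr.append(cur)                                      # coarsest band
    return pyr

def reconstruct(pyr):
    """Expand and superimpose the bands to recover the image."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = upsample_to(cur, band.shape) + band
    return cur

img = np.arange(64.0).reshape(8, 8)
print(np.allclose(reconstruct(laplacian_pyramid(img)), img))  # True
```

In actual multi-band blending, the two photos' pyramids are merged band by band across the overlap region (low frequencies blended over a wide seam, high frequencies over a narrow one) before the reconstruction step.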
Step S63: stitch the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
All the chromatic-aberration-adjusted accident-scene photos are stitched according to each photo's shooting order, splicing parts, and corresponding picture adjustment parameters; after stitching, the panoramic image of the accident scene is obtained.
In addition, the present application further proposes a car-accident-scene panoramic image display system.
Please refer to FIG. 3, a schematic diagram of the operating environment of a preferred embodiment of the car-accident-scene panoramic image display system 10 of the present application.
In this embodiment, the car-accident-scene panoramic image display system 10 is installed and runs in an electronic device 1. The electronic device 1 may be a computing device such as a desktop computer, notebook, palmtop computer, or server. The electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13. FIG. 3 shows only the electronic device 1 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as its hard disk or internal memory. In other embodiments, the memory 11 may be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1. Further, the memory 11 may include both the internal storage unit and an external storage device of the electronic device 1. The memory 11 is used to store the application software installed in the electronic device 1 and various data, such as the program code of the car-accident-scene panoramic image display system 10; it may also be used to temporarily store data that has been output or will be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), microprocessor, or other data-processing chip, used to run the program code stored in the memory 11 or process data, for example, to execute the car-accident-scene panoramic image display system 10.
In some embodiments, the display 13 may be an LED display, a liquid-crystal display, a touch liquid-crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 13 is used to display the information processed in the electronic device 1 and a visualized user interface, such as a business-customization interface. The components 11-13 of the electronic device 1 communicate with each other through a system bus.
Please refer to FIG. 4, a program module diagram of an embodiment of the car-accident-scene panoramic image display system 10 of the present application. In this embodiment, the system 10 may be divided into one or more modules, stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete the present application. For example, in FIG. 4 the system 10 may be divided into an extraction module 101, a screening module 102, a first calculation module 103, a second calculation module 104, a determination module 105, and a splicing module 106. A module in the present application refers to a series of computer program instruction segments able to complete a specific function, more suitable than a program for describing the execution of the system 10 in the electronic device 1, wherein:
the extraction module 101 is configured to, after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extract the preset-type feature points of each accident-scene photo and find, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
in this embodiment, the photo file package of a car-insurance claim scenario contains a complete set of photos of the accident scene taken with a panoramic camera. After receiving such a package, the system first extracts the preset-type feature points (for example, RootSIFT feature points) of all the accident-scene photos; then, for each feature point of each photo, it finds the first preset number (for example, four) of neighboring points closest to that feature point in distance (for example, Euclidean distance); the feature point and its first preset number of nearest neighbors form one feature-point set, thereby determining all the feature-point sets of every photo. In this embodiment, after the preset-type feature points of each photo are extracted, the pairwise distances (for example, Euclidean distances) between all of that photo's feature points can be computed to determine each feature point's first preset number of nearest neighbors;
the screening module 102 is configured to select the accident-scene photos one by one and, after a photo is selected, screen out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
after the feature-point sets of all the accident-scene photos are obtained, the system selects the photos one by one for processing; after a photo is selected, it screens out, based on each photo's feature-point sets and the predetermined screening rule, the other accident-scene photos associated with the selected photo, the "other" photos being the remaining accident-scene photos excluding the selected one;
the first calculation module 103 is configured to pair each screened-out photo with the selected photo to form groups of two and compute the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
after the other accident-scene photos associated with the selected photo are obtained, each screened-out photo is combined once with the selected photo to form a group; the homography matrix of the accident-scene photos in each group is then computed with the preset-type algorithm. In this embodiment, the preset-type algorithm is preferably the RANSAC (random sample consensus) algorithm;
the second calculation module 104 is configured to compute, based on the homography matrix of each group, the photo-matching confidence of each group and take the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
in this embodiment, the photo-matching confidence of each group is computed as follows: once a group's homography matrix has been computed, the overlap region of the two photos in the group (the selected photo and one of its associated photos) can be found through the homography transform; the homography matrix is estimated with the RANSAC algorithm, whose procedure returns a set of inliers (matching points), and the percentage of these inliers within the overlap region of the two photos is the matching confidence of the pair. After the confidence of each group is obtained, the other photo in the group with the highest confidence is taken as the matching photo of the selected photo, and the two form one matching photo pair; there may be only one highest-confidence group, or several groups may share the same highest confidence;
the determination module 105 is configured to determine, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
after the processing of the screening module 102, the first calculation module 103, and the second calculation module 104 has found the matching photo of every accident-scene photo (i.e., obtained all matching photo pairs), the system determines, from the feature-point sets of the photos in each matching photo pair, the splicing parts and shooting order of the photos in each pair; for example, these can be determined through the pair's homography matrix;
the splicing module 106 is configured to stitch the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts;
after the shooting order and splicing parts of the photos of each matching photo pair are determined, the photos in each matching photo pair are stitched accordingly, yielding the panoramic picture of the accident scene.
In the technical solution of this embodiment, after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, the system extracts the preset-type feature points of each photo and determines each photo's feature-point sets; it then selects the photos one by one for processing, using the preset screening rule on the feature-point sets of the photos to screen out the other photos associated with each selected photo, thereby finding the associated photos of every photo. For each selected photo, the selected photo is paired one by one with each of its associated photos, the homography matrix of each group is computed, the photo-matching confidence of each group is then computed from its homography matrix, and the other photo in the group with the highest confidence forms a matching photo pair with the selected photo, thereby obtaining the matching photo pair of each selected photo. Then, from the feature-point sets of the photos in each matching photo pair, the shooting order and splicing parts of those photos are determined, and the photos are stitched accordingly, yielding a panoramic image of the accident scene for display. In this solution, after receiving the photo file package of a car-insurance claim scenario, the system fully automatically stitches all the accident-scene photos in the package and quickly produces the panoramic image of the accident scene; compared with the manual stitching of the prior art, efficiency is greatly improved, real-time processing of car-insurance claims is ensured, and labor cost is reduced.
Preferably, the preset-type feature points of this embodiment are RootSIFT feature points. For one accident-scene photo, its preset-type feature points are extracted as follows:
(1) constructing the scale space of the photo through Gaussian filtering and difference of Gaussians, the scale space being the DoG (Difference of Gaussian) space;
(2) detecting the extreme points of the scale space, the detected extreme points becoming key points, which are also candidate feature points;
(3) fitting a three-dimensional quadratic function to determine the position and scale of each key point;
(4) using the gradient-direction distribution of the pixels in each key point's neighborhood to assign direction parameters to each key point, generating a multi-dimensional (for example, 128-dimensional) direction parameter vector and generating a descriptor;
(5) denoting the direction parameter vector SIFT and transforming SIFT into RootSIFT with a preset formula; the RootSIFT is likewise a multi-dimensional vector, and the preset formula is RootSIFT = sqrt(SIFT / sum(SIFT)), where sqrt denotes the element-wise square root.
Preferably, in this embodiment, the predetermined screening rule is:
determine, for each of the other accident-scene photos, the number of matching feature-point sets, i.e., feature-point sets identical to feature-point sets of the selected photo;
each of the other accident-scene photos is compared with the selected photo as follows: the feature-point sets of the other photo are compared against the feature-point sets of the selected photo to find the identical sets (the matching feature-point sets), and the number of matching feature-point sets of each other photo is counted;
if the number of matching feature-point sets of an other accident-scene photo is greater than a second preset number, determine that the photo is associated with the selected photo;
the more matching feature-point sets an other photo has, the more regions it shares with the selected photo. The second preset number is a threshold preset by the system for deciding whether an other photo is associated with the selected photo; for example, the second preset number is four. When the number of matching feature-point sets of an other photo is greater than the second preset number, the system determines that the photo meets the associated-photo requirement of the selected photo and marks it as an associated photo of the selected photo;
if the number of matching feature-point sets of an other accident-scene photo is less than or equal to the second preset number, determine that the photo is not associated with the selected photo;
when the number of matching feature-point sets of an other photo is less than or equal to the second preset number, the system determines that the photo does not meet the associated-photo requirement and marks it as not associated with the selected photo.
Of course, the above is only the preferred predetermined screening rule of this embodiment; in other embodiments, other screening rules may be adopted, for example, taking the other accident-scene photos ranked within a preset top number by matching-feature-set count as the associated photos of the selected photo, and so on.
As shown in FIG. 5, FIG. 5 is a program module diagram of a second embodiment of the car-accident-scene panoramic image display system of the present application. In this embodiment, the splicing module 106 includes:
a parameter-determining sub-module 1061, configured to compute preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a first preset algorithm;
the first preset algorithm may be a bundle adjustment algorithm or a similar algorithm. The preset-type picture adjustment parameters of each accident-scene photo in each matching pair are computed with the first preset algorithm; these parameters include a preset-type rotation matrix (for example, a rotation matrix of three Euler angles) and the camera focal length;
an adjusting sub-module 1062, configured to adjust the image chromatic aberration of each accident-scene photo in each matching photo pair with a second preset algorithm;
the second preset algorithm may be multi-band blending, which includes: first finding the overlap region, then building the image's Laplacian pyramid, the pyramid being generated by downsampling; downsampling means sampling a picture with many pixels at equal intervals to generate a new picture. Multi-band blending operates not only on the image itself but also on the images in the pyramid; finally, the pyramid levels are expanded and superimposed to generate a well-blended picture;
a splicing sub-module 1063, configured to stitch the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
All the chromatic-aberration-adjusted accident-scene photos are stitched according to each photo's shooting order, splicing parts, and corresponding picture adjustment parameters; after stitching, the panoramic image of the accident scene is obtained.
Further, the present application also proposes a computer-readable storage medium storing a car-accident-scene panoramic image display system executable by at least one processor, to cause the at least one processor to perform the method for displaying a panoramic image of a car accident scene of any of the above embodiments.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope; every equivalent structural transformation made using the contents of the specification and drawings of the present application under its inventive concept, or applied directly or indirectly in other related technical fields, is included within the patent protection scope of the present application.

Claims (20)

  1. An electronic device, characterized in that the electronic device includes a memory and a processor, the memory storing a car-accident-scene panoramic image display system runnable on the processor, the system implementing the following steps when executed by the processor:
    after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extracting the preset-type feature points of each accident-scene photo and finding, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
    selecting the accident-scene photos one by one and, after a photo is selected, screening out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
    pairing each screened-out photo with the selected photo to form groups of two, and computing the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
    based on the homography matrix of each group, computing the photo-matching confidence of each group, and taking the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
    determining, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
    stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
  2. The electronic device according to claim 1, characterized in that the preset-type feature points are RootSIFT feature points, and for one accident-scene photo, the step of extracting its preset-type feature points includes:
    (1) constructing the scale space of the photo through Gaussian filtering and difference of Gaussians;
    (2) detecting the extreme points of the scale space, the detected extreme points becoming key points;
    (3) fitting a three-dimensional quadratic function to determine the position and scale of each key point;
    (4) using the gradient-direction distribution of the pixels in each key point's neighborhood to assign direction parameters to each key point, generating a multi-dimensional direction parameter vector and generating a descriptor;
    (5) denoting the direction parameter vector SIFT and transforming SIFT into RootSIFT with a preset formula, the RootSIFT likewise being a multi-dimensional vector and the preset formula being RootSIFT = sqrt(SIFT / sum(SIFT)), where sqrt denotes the element-wise square root.
  3. The electronic device according to claim 1, characterized in that the predetermined screening rule is:
    determining, for each of the other accident-scene photos, the number of matching feature-point sets, i.e., feature-point sets identical to feature-point sets of the selected photo;
    if the number of matching feature-point sets of an other accident-scene photo is greater than a second preset number, determining that the photo is associated with the selected photo;
    if the number of matching feature-point sets of an other accident-scene photo is less than or equal to the second preset number, determining that the photo is not associated with the selected photo.
  4. The electronic device according to claim 1, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  5. The electronic device according to claim 2, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  6. The electronic device according to claim 3, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  7. A method for displaying a panoramic image of a car accident scene, characterized in that the method includes the steps of:
    after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extracting the preset-type feature points of each accident-scene photo and finding, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
    selecting the accident-scene photos one by one and, after a photo is selected, screening out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
    pairing each screened-out photo with the selected photo to form groups of two, and computing the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
    based on the homography matrix of each group, computing the photo-matching confidence of each group, and taking the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
    determining, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
    stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
  8. The method according to claim 7, characterized in that the preset-type feature points are RootSIFT feature points, and for one accident-scene photo, the step of extracting its preset-type feature points includes:
    (1) constructing the scale space of the photo through Gaussian filtering and difference of Gaussians;
    (2) detecting the extreme points of the scale space, the detected extreme points becoming key points;
    (3) fitting a three-dimensional quadratic function to determine the position and scale of each key point;
    (4) using the gradient-direction distribution of the pixels in each key point's neighborhood to assign direction parameters to each key point, generating a multi-dimensional direction parameter vector and generating a descriptor;
    (5) denoting the direction parameter vector SIFT and transforming SIFT into RootSIFT with a preset formula, the RootSIFT likewise being a multi-dimensional vector and the preset formula being RootSIFT = sqrt(SIFT / sum(SIFT)), where sqrt denotes the element-wise square root.
  9. The method according to claim 7, characterized in that the predetermined screening rule is:
    determining, for each of the other accident-scene photos, the number of matching feature-point sets, i.e., feature-point sets identical to feature-point sets of the selected photo;
    if the number of matching feature-point sets of an other accident-scene photo is greater than a second preset number, determining that the photo is associated with the selected photo;
    if the number of matching feature-point sets of an other accident-scene photo is less than or equal to the second preset number, determining that the photo is not associated with the selected photo.
  10. The method according to claim 7, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  11. The method according to claim 8, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  12. The method according to claim 9, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a car-accident-scene panoramic image display system executable by at least one processor, to cause the at least one processor to perform the following steps:
    after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extracting the preset-type feature points of each accident-scene photo and finding, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
    selecting the accident-scene photos one by one and, after a photo is selected, screening out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
    pairing each screened-out photo with the selected photo to form groups of two, and computing the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
    based on the homography matrix of each group, computing the photo-matching confidence of each group, and taking the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
    determining, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
    stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
  14. The computer-readable storage medium according to claim 13, characterized in that the preset-type feature points are RootSIFT feature points, and for one accident-scene photo, the step of extracting its preset-type feature points includes:
    (1) constructing the scale space of the photo through Gaussian filtering and difference of Gaussians;
    (2) detecting the extreme points of the scale space, the detected extreme points becoming key points;
    (3) fitting a three-dimensional quadratic function to determine the position and scale of each key point;
    (4) using the gradient-direction distribution of the pixels in each key point's neighborhood to assign direction parameters to each key point, generating a multi-dimensional direction parameter vector and generating a descriptor;
    (5) denoting the direction parameter vector SIFT and transforming SIFT into RootSIFT with a preset formula, the RootSIFT likewise being a multi-dimensional vector and the preset formula being RootSIFT = sqrt(SIFT / sum(SIFT)), where sqrt denotes the element-wise square root.
  15. The computer-readable storage medium according to claim 13, characterized in that the predetermined screening rule is:
    determining, for each of the other accident-scene photos, the number of matching feature-point sets, i.e., feature-point sets identical to feature-point sets of the selected photo;
    if the number of matching feature-point sets of an other accident-scene photo is greater than a second preset number, determining that the photo is associated with the selected photo;
    if the number of matching feature-point sets of an other accident-scene photo is less than or equal to the second preset number, determining that the photo is not associated with the selected photo.
  16. The computer-readable storage medium according to claim 13, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  17. The computer-readable storage medium according to claim 14, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  18. The computer-readable storage medium according to claim 15, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  19. The computer-readable storage medium according to claim 15, characterized in that the step of stitching the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts includes:
    computing preset-type picture adjustment parameters for each accident-scene photo in each matching photo pair with a bundle adjustment algorithm;
    adjusting the image chromatic aberration of each accident-scene photo in each matching photo pair with multi-band blending;
    stitching the chromatic-aberration-adjusted accident-scene photos in each matching photo pair according to their shooting order, splicing parts, and picture adjustment parameters.
  20. A car-accident-scene panoramic image display system, characterized by comprising:
    an extraction module, configured to, after receiving a file package of car-accident-scene photos of a car-insurance claim scenario, extract the preset-type feature points of each accident-scene photo and find, for each feature point, a first preset number of its nearest neighboring points, each feature point together with its first preset number of nearest neighbors forming one feature-point set;
    a screening module, configured to select the accident-scene photos one by one and, after a photo is selected, screen out, based on the feature-point sets of all the photos and according to a predetermined screening rule, the other accident-scene photos associated with the selected photo;
    a first calculation module, configured to pair each screened-out photo with the selected photo to form groups of two and compute the homography matrix of the accident-scene photos in each group using a preset-type algorithm;
    a second calculation module, configured to compute, based on the homography matrix of each group, the photo-matching confidence of each group and take the other photo in the group with the highest confidence as the matching photo of the selected photo, the other photo in the highest-confidence group and the selected photo forming one matching photo pair;
    a determination module, configured to determine, from the feature-point sets of the accident-scene photos in each matching photo pair, the shooting order and splicing parts of those photos;
    a splicing module, configured to stitch the accident-scene photos in each matching photo pair according to the determined shooting order and splicing parts.
PCT/CN2017/113725 2017-10-27 2017-11-30 电子装置、车祸现场全景图像展示方法和存储介质 WO2019080257A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711025267.6 2017-10-27
CN201711025267.6A CN108022211A (zh) 2017-10-27 2017-10-27 电子装置、车祸现场全景图像展示方法和存储介质

Publications (1)

Publication Number Publication Date
WO2019080257A1 true WO2019080257A1 (zh) 2019-05-02

Family

ID=62080297

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/113725 WO2019080257A1 (zh) 2017-10-27 2017-11-30 电子装置、车祸现场全景图像展示方法和存储介质

Country Status (2)

Country Link
CN (1) CN108022211A (zh)
WO (1) WO2019080257A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310440A (zh) * 2023-03-16 2023-06-23 中国华能集团有限公司北京招标分公司 一种规则引擎使用方法
CN116611963A (zh) * 2023-05-23 2023-08-18 中建安装集团有限公司 一种基于物联网的工程数据监测分析系统及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020172413A1 (en) * 2001-04-03 2002-11-21 Chen George Q. Methods and apparatus for matching multiple images
CN105678622A (zh) * 2016-01-07 2016-06-15 平安科技(深圳)有限公司 车险理赔照片的分析方法及系统
CN106331668A (zh) * 2016-08-03 2017-01-11 Tcl集团股份有限公司 一种多投影的图像显示方法及其系统
CN106455956A (zh) * 2014-06-01 2017-02-22 王康怀 通过置信度匹配重建来自体内多相机胶囊的图像

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456022B (zh) * 2013-09-24 2016-04-06 中国科学院自动化研究所 一种高分辨率遥感图像特征匹配方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020172413A1 (en) * 2001-04-03 2002-11-21 Chen George Q. Methods and apparatus for matching multiple images
CN106455956A (zh) * 2014-06-01 2017-02-22 王康怀 通过置信度匹配重建来自体内多相机胶囊的图像
CN105678622A (zh) * 2016-01-07 2016-06-15 平安科技(深圳)有限公司 车险理赔照片的分析方法及系统
CN106331668A (zh) * 2016-08-03 2017-01-11 Tcl集团股份有限公司 一种多投影的图像显示方法及其系统

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310440A (zh) * 2023-03-16 2023-06-23 中国华能集团有限公司北京招标分公司 一种规则引擎使用方法
CN116611963A (zh) * 2023-05-23 2023-08-18 中建安装集团有限公司 一种基于物联网的工程数据监测分析系统及方法
CN116611963B (zh) * 2023-05-23 2024-05-24 中建安装集团有限公司 一种基于物联网的工程数据监测分析系统及方法

Also Published As

Publication number Publication date
CN108022211A (zh) 2018-05-11

Similar Documents

Publication Publication Date Title
US8487926B2 (en) Method and apparatus for generating 3D image using 2D photograph images
US9754192B2 (en) Object detection utilizing geometric information fused with image data
JP6322126B2 (ja) 変化検出装置、変化検出方法、および、変化検出プログラム
JP2017108401A5 (ja) スマートフォンベースの方法、スマートフォン及びコンピュータ可読媒体
CN111915483B (zh) 图像拼接方法、装置、计算机设备和存储介质
JP6293386B2 (ja) データ処理装置、データ処理方法及びデータ処理プログラム
JP2014071850A (ja) 画像処理装置、端末装置、画像処理方法、およびプログラム
JP2008513852A5 (zh)
JP2017103748A (ja) 画像処理方法およびプログラム
JP2018026064A (ja) 画像処理装置、画像処理方法、システム
CN112348885A (zh) 视觉特征库的构建方法、视觉定位方法、装置和存储介质
JP5656768B2 (ja) 画像特徴量抽出装置およびそのプログラム
KR101868740B1 (ko) 파노라마 이미지 생성 방법 및 장치
CN113807451A (zh) 全景图像特征点匹配模型的训练方法、装置以及服务器
CN108229281B (zh) 神经网络的生成方法和人脸检测方法、装置及电子设备
CN110310325B (zh) 一种虚拟测量方法、电子设备及计算机可读存储介质
WO2019080257A1 (zh) 电子装置、车祸现场全景图像展示方法和存储介质
CN112102404B (zh) 物体检测追踪方法、装置及头戴显示设备
Schaffland et al. An interactive web application for the creation, organization, and visualization of repeat photographs
US20180182169A1 (en) Marker for augmented reality employing a trackable marker template
JP2006113832A (ja) ステレオ画像処理装置およびプログラム
CN115063485B (zh) 三维重建方法、装置及计算机可读存储介质
US10783649B2 (en) Aligning digital images by selectively applying pixel-adjusted-gyroscope alignment and feature-based alignment models
CN109034214B (zh) 用于生成标记的方法和装置
JP6304815B2 (ja) 画像処理装置ならびにその画像特徴検出方法、プログラムおよび装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17929705

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 24/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17929705

Country of ref document: EP

Kind code of ref document: A1