WO2020042800A1 - Method, apparatus and device for assisting vehicle damage assessment image capture - Google Patents

Method, apparatus and device for assisting vehicle damage assessment image capture

Info

Publication number
WO2020042800A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
shooting
detection model
component
image
Prior art date
Application number
PCT/CN2019/096321
Other languages
English (en)
French (fr)
Inventor
张泰玮
周凡
周大江
鲁志红
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Publication of WO2020042800A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 — Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 — Insurance

Definitions

  • This specification relates to the field of data processing, and in particular to a method, an apparatus, and a device for assisting the capture of vehicle damage assessment images.
  • In vehicle insurance claims, the core supporting material is the vehicle damage assessment image.
  • The damage assessment image is usually obtained by an operator photographing the scene, and the damage assessment is then performed according to the photos taken there.
  • Damage assessment images are generally required to clearly reflect the damage to the vehicle, which usually requires the photographer to have relevant knowledge of vehicle damage assessment in order to capture images that meet the processing requirements.
  • Otherwise, the captured images may not meet the processing requirements, and images of different quality can lead to different assessment results.
  • In view of this, this specification provides a method, an apparatus, and a device for assisting vehicle damage assessment image capture.
  • According to a first aspect, a method for assisting vehicle damage assessment image capture includes:
  • acquiring an image collected by a camera module; identifying components of the target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based at least on the identified component information to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information; and outputting, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose.
  • Optionally, the component information includes a component position, a component size, and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
  • Optionally, the component information is obtained by recognizing the image with a preset component detection model; the component detection model is obtained by training an initial component detection model with first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include the component information of the vehicle components in the sample images.
  • Optionally, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model and predicting with the distance detection model; the distance detection model is obtained by training an initial distance detection model with second training sample data; in the second training sample data, the sample features include sample images and the component information of the vehicle components in the sample images, and the sample labels include shooting distance information.
  • Optionally, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model, and the image as input data of a preset angle detection model and predicting with the angle detection model; the angle detection model is obtained by training an initial angle detection model with third training sample data; in the third training sample data, the sample features include sample images, the component information of the vehicle components in the sample images, and shooting distance information, and the sample labels include shooting angle information.
  • the initial component detection model, the initial distance detection model, and the initial angle detection model each include a MobileNets model.
  • Optionally, shooting in the desired pose includes performing one or more of panoramic shooting, mid-range shooting, and close-up shooting at a specified shooting angle; panoramic shooting, mid-range shooting, and close-up shooting are distinguished by shooting distance, in descending order.
  • According to a second aspect, an apparatus for assisting vehicle damage assessment image capture includes:
  • an image acquisition module configured to acquire an image collected by a camera module;
  • an information detection module configured to identify components of the target vehicle in the image, and detect the relative pose of the camera module and the target vehicle based at least on the identified component information to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information; and
  • an information reminder module configured to output, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose.
  • Optionally, the component information includes a component position, a component size, and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
  • Optionally, the component information is obtained by recognizing the image with a preset component detection model; the component detection model is obtained by training an initial component detection model with first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include the component information of the vehicle components in the sample images.
  • Optionally, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model and predicting with the distance detection model; the distance detection model is obtained by training an initial distance detection model with second training sample data; in the second training sample data, the sample features include sample images and the component information of the vehicle components in the sample images, and the sample labels include shooting distance information.
  • Optionally, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model, and the image as input data of a preset angle detection model and predicting with the angle detection model; the angle detection model is obtained by training an initial angle detection model with third training sample data; in the third training sample data, the sample features include sample images, the component information of the vehicle components in the sample images, and shooting distance information, and the sample labels include shooting angle information.
  • the initial component detection model, the initial distance detection model, and the initial angle detection model each include a MobileNets model.
  • Optionally, shooting in the desired pose includes performing one or more of panoramic shooting, close-up shooting, and mid-range shooting at a specified shooting angle; panoramic shooting, mid-range shooting, and close-up shooting are distinguished by shooting distance, in descending order.
  • According to a third aspect, a computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the following method:
  • acquiring an image collected by a camera module; identifying components of the target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based at least on the identified component information to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information; and outputting, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose.
  • In the embodiments of this specification, the components of the target vehicle are identified in the images collected by the camera module, and the relative pose of the camera module and the target vehicle is detected based at least on the identified component information to obtain pose information including one or more of shooting distance information and shooting angle information.
  • Based on a comparison result obtained by comparing the obtained pose information with preset desired pose information, reminder information is output to guide the user to control the camera module to shoot the target vehicle in the desired pose. This provides feedback on the pose of the camera module, guides the photographer to adjust the shooting method, and improves the quality of the captured vehicle damage assessment images.
  • Fig. 1 is a diagram of an application scenario for capturing vehicle damage assessment images according to an exemplary embodiment of this specification.
  • Fig. 2 is a flowchart of a method for assisting vehicle damage assessment image capture according to an exemplary embodiment of this specification.
  • Fig. 3A is a flowchart of another method for assisting vehicle damage assessment image capture according to an exemplary embodiment of this specification.
  • Fig. 3B is an application example of assisting vehicle damage assessment image capture according to an exemplary embodiment of this specification.
  • Fig. 4 is a hardware structure diagram of a computer device in which an apparatus for assisting vehicle damage assessment image capture is located, according to an exemplary embodiment of this specification.
  • Fig. 5 is a block diagram of an apparatus for assisting vehicle damage assessment image capture according to an exemplary embodiment of this specification.
  • Although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another.
  • For example, without departing from the scope of this specification, the first information may also be referred to as second information, and similarly, the second information may be referred to as first information.
  • Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
  • In insurance claims, the vehicle damage assessment image is one of the materials used to assess vehicle damage.
  • The vehicle damage assessment image may be an image used to assess and verify the loss of a vehicle involved in an insurance claim. It is usually obtained by an operator or the vehicle owner photographing the scene. Because it must clearly reflect which vehicle is involved, which parts are damaged, the type of damage, and the degree of damage, strict requirements are placed on the quality of the image. With the rapid development of mobile terminals, users can use a mobile terminal with a shooting function to take pictures at any time.
  • As shown in FIG. 1, which is a diagram of an application scenario for capturing vehicle damage assessment images according to an exemplary embodiment of this specification:
  • users can use mobile phones to photograph damaged vehicles and obtain vehicle damage assessment images.
  • However, non-professionals photographing damaged vehicles may obtain images that do not meet the requirements of damage assessment processing.
  • Images of different quality will directly affect the final assessment result. Therefore, a processing solution that can improve the quality of captured vehicle damage assessment images is needed.
  • In view of this, the embodiments of this specification provide a solution for assisting vehicle damage assessment image capture.
  • With this solution, the photographer can be guided to adjust the shooting method, the quality of the captured images can be improved, and the subsequent damage assessment and claim settlement process can proceed smoothly.
  • As shown in FIG. 2, which is a flowchart of a method for assisting vehicle damage assessment image capture according to an exemplary embodiment of this specification,
  • the method includes:
  • Step 202: acquiring an image collected by a camera module;
  • Step 204: identifying components of the target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based at least on the identified component information to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
  • Step 206: based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose.
  • The acquired image may be a stored image, for example an image saved when a photographing control is triggered; it may also be an image currently captured by the camera module but not yet stored.
  • In an example, frames collected by the camera module are extracted and processed at a preset frequency in order to obtain the images collected by the camera module.
  • For example, the preset frequency may be twice per second.
  • The images collected by the camera module may also be acquired in real time, so that the picture currently framed by the user's lens can be detected in real time to determine whether the user needs to adjust the shooting pose.
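The frame extraction step above can be sketched as follows. The 0.5-second period (twice per second) is the example given in this specification; the function name and the simulated timestamp stream are illustrative assumptions, and a real implementation would hook into the device camera API.

```python
# Select frames from a camera stream at a preset frequency.
# Timestamps are in seconds; a real stream would come from the camera.

def select_frames(timestamps, period=0.5):
    """Keep only frames spaced at least `period` seconds apart."""
    selected, last = [], None
    for t in timestamps:
        if last is None or t - last >= period:
            selected.append(t)
            last = t
    return selected

# A 30 fps stream sampled for one second yields about two frames.
stream = [i / 30 for i in range(30)]
print(select_frames(stream))  # [0.0, 0.5]
```

Throttling the per-frame model calls this way keeps the on-device compute budget bounded regardless of the camera's native frame rate.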
  • The purpose of the embodiments of this specification is to use the currently acquired image to determine the relative pose of the camera module and the target vehicle, that is, the relative positional relationship between the two.
  • For example, the relative pose can be the shooting distance or the shooting angle.
  • The determined shooting distance or shooting angle is compared with the desired shooting distance or shooting angle, and reminder information guiding the user is output according to the comparison result, thereby providing real-time feedback on the shooting state of the camera module and correcting picture-quality problems caused by improper shooting methods.
  • To this end, this embodiment first identifies the component information of the target vehicle in the image.
  • A component may be an integral part of the target vehicle.
  • The component information may be information describing a component.
  • For example, the component information may include a component position, a component size, and a component identifier.
  • The component position may be the position of the component in the image.
  • The component size may be the size of the component in the image.
  • The component identifier is used to distinguish different components of the target vehicle; for example, it may be a component name code.
  • In an example, the component size and the component position may be marked with a rectangular bounding box, whose coordinate information represents both the size and the position of the component.
  • The component information may further include information such as the shape of the component, which is not detailed here.
  • The following uses one method of identifying component information as an example.
  • The component information is obtained by recognizing the image with a preset component detection model; the component detection model is obtained by training an initial component detection model with first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include the component information of the vehicle components in the sample images, for example the component size, component position, and component identifier.
  • That is, the embodiments of this specification use the sample images as sample features and the component information of the vehicle components in the sample images as sample labels to construct the first training sample data, and train the initial component detection model with this data to obtain the preset component detection model.
  • In other examples, other component information may also be used as sample labels.
  • For example, component-related information such as the component damage status, component damage degree, component damage size, and component damage location may be used as sample labels, so that the component detection model can predict more component information and provide better guidance.
  • This embodiment uses supervised learning to train the component detection model.
  • In practice, a deep learning model can be used as the initial component detection model; in particular, the MobileNets model can be used to construct it, which ensures computational efficiency while maintaining accuracy, so that real-time feedback can be achieved even on low-end devices such as mobile terminals, providing a better user experience.
  • MobileNets is based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks.
  • Two global hyperparameters that it introduces effectively trade off latency against accuracy, allowing model builders to choose a model of the appropriate size for their application based on the constraints of the problem.
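As a rough illustration of why depthwise separable convolutions make such models lightweight, the parameter counts of a standard and a separable 3×3 convolution layer can be compared. The layer sizes below are arbitrary examples, not taken from this specification.

```python
# Parameter counts for a standard vs. depthwise separable convolution,
# the building block of MobileNets-style lightweight networks.

def standard_conv_params(k, c_in, c_out):
    # A k x k standard convolution mixes space and channels at once.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel.
    # Pointwise: a 1 x 1 convolution to mix channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)   # 73728 parameters
sep = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
print(std, sep)  # roughly an 8x reduction for this layer
```

This reduction, compounded over many layers, is what makes real-time on-device inference feasible on mid- and low-end phones.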
  • With the trained component detection model, the component information of the target vehicle in the image can be predicted, improving the efficiency of obtaining component information.
  • After the component information is identified, the relative pose of the shooting device and the target vehicle, such as the shooting distance and the shooting angle, can be predicted.
  • The shooting distance information may be information describing the distance between the camera module and the target vehicle.
  • In an example, the shooting distance information may be a specific distance value.
  • In another example, the shooting distance information may be the distance range to which the distance between the camera module and the target vehicle belongs. Describing this distance by a range allows the distance comparison to be performed over ranges, which improves comparison efficiency and reduces the difficulty of distance detection.
  • The distance range can be expressed as a numeric interval.
  • The distance range may also be represented by a distance level. For example, different distance ranges may be represented by three levels of far, middle, and near, which may correspond to panoramic, mid-range, and close-up shots. When the determined distance range differs from the desired distance range, a distance adjustment reminder may be issued.
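The far/middle/near levels described above could be derived from an estimated distance as in the following sketch. The threshold values are invented for illustration and are not specified in this document.

```python
# Map a measured camera-to-vehicle distance (in meters) to the
# far/middle/near levels, which correspond to panoramic, mid-range,
# and close-up shots. Thresholds are hypothetical.

def distance_level(distance_m: float) -> str:
    if distance_m >= 5.0:   # far    -> panoramic shot
        return "far"
    if distance_m >= 2.0:   # middle -> mid-range shot
        return "middle"
    return "near"           # near   -> close-up shot

print(distance_level(3.0))  # middle
```

Comparing levels instead of raw distances is what makes the desired-pose check a cheap equality test rather than a calibrated metric comparison.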
  • As described above, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model and predicting with the distance detection model; the distance detection model is obtained by training an initial distance detection model with second training sample data.
  • In the second training sample data, the sample features include sample images and the component information of the vehicle components in the sample images, and the sample labels include shooting distance information.
  • That is, the embodiments of this specification use the sample images and the component information of the vehicle components in the sample images as sample features and the shooting distance information as sample labels to construct the second training sample data. The initial distance detection model is then trained with the second training sample data to obtain the preset distance detection model.
  • Not only the sample images but also the component information is used as sample features, which can improve the prediction results of the distance detection model.
  • This embodiment uses supervised learning to obtain the distance detection model.
  • In practice, a deep learning model can be used as the initial distance detection model; in particular, the MobileNets model can be used to construct it.
  • After training, the output data of the component detection model and the image can be input into the preset distance detection model, and the output of the distance detection model can be used as the shooting distance information, so that the distance detection model predicts the shooting distance information and improves its accuracy.
  • The shooting angle information may be the relative angle between the lens plane of the camera module and the target vehicle.
  • In an example, the shooting angle information may be a specific shooting angle value.
  • In another example, the shooting angle information may be the angle range to which the shooting angle belongs.
  • For example, the shooting angle information may be represented as a downward shot, an oblique shot, an upward shot, a head-on shot, and the like.
  • The angle range can also be expressed by a specific range of values.
  • As described above, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model, and the image as input data of a preset angle detection model and predicting with the angle detection model;
  • the angle detection model is obtained by training an initial angle detection model with third training sample data;
  • in the third training sample data, the sample features include sample images, the component information of the vehicle components in the sample images, and shooting distance information, and the sample labels include shooting angle information.
  • That is, the embodiments of this specification use the sample images, the component information of the vehicle components in the sample images, and the shooting distance information as sample features, and the shooting angle information as sample labels, to construct the third training sample data.
  • The third training sample data is then used to train the initial angle detection model to obtain the preset angle detection model.
  • Not only the sample images but also the component information and the shooting distance information are used as sample features, which can improve the prediction results of the angle detection model.
  • This embodiment uses supervised learning to obtain the angle detection model.
  • In practice, a deep learning model may be used as the initial angle detection model; in particular, the MobileNets model can be used to construct it.
  • After training, the output data of the component detection model, the output data of the distance detection model, and the image can be input into the trained angle detection model, and its output can be used as the shooting angle information, so that the angle detection model predicts the shooting angle information and improves its accuracy.
  • It can be seen that the models on the core full link are mobile deep learning models, which ensure computational efficiency while maintaining accuracy and can provide real-time feedback even on low-end and mid-range devices, offering a better user experience.
  • In other examples, other models may also be used to estimate the shooting distance information and shooting angle information between the camera module and the target vehicle, which are not detailed here.
  • After the pose information is obtained, it can be compared with the preset desired pose information, and reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose can be output according to the comparison result.
  • The desired pose may be the relative relationship between the camera module and the target vehicle when the desired pose information is satisfied. If the shooting distance does not fall within the desired shooting distance, the user may be reminded to control the camera module to shoot the target vehicle at the desired distance; if the shooting angle does not fall within the desired shooting angle, the user may be reminded to control the camera module to shoot the target vehicle at the desired angle.
  • For example, if the shooting distance is greater than the desired shooting distance, the user may be reminded to move the camera module closer to the target vehicle.
  • If the shooting distance information is "near" while the desired shooting distance information is "middle", the user may be reminded to move the camera module slightly away from the target vehicle, and so on.
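The distance comparison and reminders described above can be sketched as follows. The level ordering and the message wording are illustrative assumptions, not this specification's exact prompts.

```python
# Compare a detected distance level against the desired one and
# produce a user-facing adjustment reminder.

LEVELS = ["near", "middle", "far"]  # ascending shooting distance

def distance_reminder(detected: str, desired: str) -> str:
    d, w = LEVELS.index(detected), LEVELS.index(desired)
    if d > w:   # too far away
        return "Please move closer to the vehicle."
    if d < w:   # too close
        return "Please move slightly away from the vehicle."
    return "Distance OK."

print(distance_reminder("near", "middle"))
```

The same comparison shape applies to angle ranges: an equality test on the detected vs. desired range, with a direction-specific message on mismatch.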
  • In an example, shooting in the desired pose includes performing one or more of close-up shooting, mid-range shooting, and panoramic shooting at a specified shooting angle.
  • The shooting distance for panoramic shooting is greater than that for mid-range shooting, which in turn is greater than that for close-up shooting.
  • The specified shooting angle may be, for example, a 45-degree angle.
  • For example, when performing panoramic shooting, a reminder such as "Please take a 45-degree panoramic picture of the vehicle and make sure the license plate is visible" may be output according to the comparison result; when performing close-up shooting, a reminder such as "Please move closer and shoot the damaged details of the vehicle so the degree of damage can be seen" may be output; and when performing mid-range shooting, a reminder such as "Please take two steps back and shoot the damaged part of the vehicle so the damage overview can be seen" may be output.
  • The reminder information may be output as a voice broadcast or a text reminder, and may even be a picture reminder.
  • For example, an image of a sample vehicle taken in the desired pose may be shown to remind the user intuitively.
  • In an example, an instruction for shooting the target vehicle in the desired pose may be output before the image is acquired.
  • The purpose of outputting the instruction is to inform the user of the type of image to be captured, so that various types of vehicle damage assessment images can be obtained.
  • The instruction may include one or more of a panoramic shooting instruction, a mid-range shooting instruction, a close-up shooting instruction, a specified-angle shooting instruction, and the like.
  • As shown in FIG. 3A, which is a flowchart of another method for assisting vehicle damage assessment image capture according to an exemplary embodiment of this specification:
  • after the camera module is turned on, the camera mode can be entered.
  • Before shooting, an instruction for shooting the target vehicle in the desired pose may be output; for example, the instruction includes one or more of a panoramic shooting instruction, a mid-range shooting instruction, a close-up shooting instruction, and a specified-angle shooting instruction.
  • The shooting component extracts frames at a preset frequency, and for each captured frame the component's built-in models are called for processing.
  • The image is input into the component detection model, which performs component recognition on the image to obtain component information.
  • The output data of the component detection model and the image are input into the preset distance detection model to obtain shooting distance information.
  • The output data of the component detection model, the output data of the distance detection model, and the image are input into the trained angle detection model to obtain shooting angle information.
  • The conclusion is output through a fusion model.
  • The shooting component outputs user prompts according to rules, prompting the user to adjust the distance (closer or farther) and the shooting angle.
  • In this way, the acquired images can be processed in real time and prompt feedback generated.
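The chained per-frame processing above can be sketched as follows: component detection feeds distance detection, which feeds angle detection, and the fused result is compared against the desired pose. The model functions and their outputs are stand-ins, not this specification's actual implementation.

```python
# Sketch of the per-frame inference chain with stubbed-out models.
from dataclasses import dataclass

@dataclass
class Pose:
    distance: str  # "near" / "middle" / "far"
    angle: str     # e.g. "45deg", "head-on"

def detect_components(image):
    # Stand-in for the MobileNets-based component detection model.
    return [{"id": "front_bumper", "box": (10, 20, 200, 120)}]

def detect_distance(image, components):
    # Stand-in: the distance model consumes the image plus component output.
    return "far"

def detect_angle(image, components, distance):
    # Stand-in: the angle model consumes image, components, and distance.
    return "45deg"

def process_frame(image, desired: Pose) -> str:
    comps = detect_components(image)
    dist = detect_distance(image, comps)
    angle = detect_angle(image, comps, dist)
    # Fuse the results and compare against the desired pose.
    if dist != desired.distance:
        return f"adjust distance: {dist} -> {desired.distance}"
    if angle != desired.angle:
        return f"adjust angle: {angle} -> {desired.angle}"
    return "pose OK"

print(process_frame(None, Pose(distance="middle", angle="45deg")))
```

Passing each model's output downstream, rather than running the three models independently, is the design choice that lets the distance and angle models exploit the localized component evidence.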
  • As shown in FIG. 3B, which is an application example of assisting vehicle damage assessment image capture according to an exemplary embodiment:
  • an example image can be provided, so that users who do not know how to shoot can follow it.
  • FIG. 3B is illustrated with a panoramic example image.
  • Suppose the user is required to capture three types of images: a panoramic image, a close-up image, and a mid-range image.
  • Corresponding desired pose information is configured for each image type. The user opens the Dingbao application and enters the vehicle panoramic shooting scene.
  • In the panoramic shooting scene, a reminder is output: "Please take a 45-degree panoramic picture of the vehicle and make sure the license plate is visible." After detecting that the user has completed the shot under this guidance, the application enters the vehicle mid-range shooting scene. In the mid-range shooting scene, if the shooting distance detected from the currently captured image is less than the desired mid-range shooting distance, a reminder is output: "Please take two steps back and shoot the damaged part of the vehicle so the damage overview can be seen." After detecting that the user has completed the shot under this guidance, the application enters the vehicle close-up shooting scene.
  • In the close-up shooting scene, if the shooting distance detected from the currently captured image is greater than the desired close-up shooting distance, a reminder is output: "Please move closer and shoot the damaged details of the vehicle so the extent of the damage can be seen."
  • After shooting, the captured image can be displayed for the user to preview, and a damage assessment image submission operation is performed when the submission control is tapped. It can be understood that corresponding reminder information can also be output according to other comparison results, which is not detailed here.
  • In the above scheme, the AI model computing power of the mobile terminal is used, and the computation results for components, distance, and shooting angle are fused to output accurate pose feedback such as shooting distance and shooting angle, which can guide users to adjust the shooting method and produce higher-quality vehicle damage images.
  • Corresponding to the foregoing method embodiments, this specification also provides embodiments of an apparatus for assisting vehicle damage assessment image capture and of an electronic device to which the apparatus is applied.
  • The apparatus embodiments in this specification can be applied to computer equipment.
  • The apparatus embodiments may be implemented by software, by hardware, or by a combination of software and hardware.
  • Taking software implementation as an example, the apparatus, in a logical sense, is formed by the processor of the computer equipment in which it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them.
  • In terms of hardware, FIG. 4 is a hardware structure diagram of the computer device in which the apparatus for assisting vehicle damage-assessment image capture is located. In addition to the processor 410, network interface 420, memory 430 and non-volatile storage 440 shown in FIG. 4, the computer device in which the apparatus 431 is located may generally include other hardware according to the device's actual functions, which is not detailed here.
  • FIG. 5 is a block diagram of an apparatus for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification.
  • the device includes:
  • An image acquisition module 52 configured to: acquire an image collected by a camera module
  • An information detection module 54, configured to identify components of the target vehicle in the image, and detect the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, where the pose information includes one or more of shooting distance information and shooting angle information;
  • An information reminding module 56, configured to output, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
  • In an embodiment, the component information includes a component position, a component size and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
  • In an embodiment, the component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include component information of vehicle components in the sample images.
  • In an embodiment, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model, and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; in the second training sample data, the sample features include sample images and component information of vehicle components in the sample images, and the sample labels include shooting distance information.
  • In an embodiment, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model, and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; in the third training sample data, the sample features include sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels include shooting angle information.
  • the initial component detection model, the initial distance detection model, and the initial angle detection model each include a MobileNets model.
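  • The MobileNets models mentioned above gain their on-device efficiency by replacing standard convolutions with depthwise separable convolutions. As a rough illustration of the saving, the following sketch (plain Python arithmetic, not tied to any framework) compares the parameter counts of a standard convolution and its depthwise separable equivalent; the layer shapes are illustrative assumptions, not values from this document.

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution mixes space and channels in one step.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    depthwise = k * k * c_in
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 128 input channels, 256 output channels.
std = standard_conv_params(3, 128, 256)        # 294,912 parameters
sep = depthwise_separable_params(3, 128, 256)  # 33,920 parameters
print(f"standard={std}, separable={sep}, ratio={std / sep:.1f}x")
```

  The roughly 8.7x reduction at this layer shape is what makes real-time feedback feasible on low-end phones, as the surrounding text notes.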
  • In an embodiment, shooting in the desired pose includes one or more of panoramic shooting, mid-range shooting and close-up shooting at a specified shooting angle; panoramic, mid-range and close-up shooting are distinguished by shooting distance in descending order.
  • For the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant parts.
  • The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this specification, which those of ordinary skill in the art can understand and implement without creative effort.
  • Correspondingly, an embodiment of this specification further provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following method when executing the program:
  • Acquiring an image collected by a camera module; identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information; and, based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
  • An embodiment further provides a computer storage medium storing program instructions, the program instructions including:
  • Acquiring an image collected by a camera module; identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information; and, based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
  • the embodiments of the present specification may take the form of a computer program product implemented on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing program code.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and information can be stored by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of this specification provide a method, apparatus and device for assisting the capture of vehicle damage-assessment images. In the embodiments of this specification, an image collected by a camera module is acquired, components of the target vehicle in the image are identified, and the relative pose of the camera module and the target vehicle is detected based on at least the component information obtained from the identification, to obtain pose information including one or more of shooting distance information and shooting angle information. Based on a comparison result obtained by comparing the obtained pose information with preset desired pose information, reminder information is output to guide the user to control the camera module to shoot the target vehicle in the desired pose, thereby guiding the photographer to adjust the shooting manner.

Description

Method, apparatus and device for assisting vehicle damage-assessment image capture. Technical field
This specification relates to the technical field of data processing, and in particular to a method, apparatus and device for assisting vehicle damage-assessment image capture.
Background
In the vehicle insurance industry, when a vehicle owner files a claim after a vehicle accident, the insurance company needs to assess the degree of damage to the vehicle in order to determine the list of items to be repaired and the amount of compensation. At present, in the damage-assessment process for an insured vehicle, the core supporting material is the vehicle damage-assessment image.
At present, vehicle damage-assessment images are usually obtained by on-site photography by operators, and damage assessment is then performed based on the photographs taken on site. Damage-assessment images are often required to clearly reflect the damage to the vehicle, which usually requires the photographer to have relevant damage-assessment knowledge in order to capture images that meet the processing requirements. In practice, however, the photographs are often taken by the vehicle owner, either on the owner's own initiative or at the request of insurance company staff, and the resulting images may not meet the processing requirements. Damage-assessment images of different quality may lead to different assessment results.
Summary
To overcome the problems in the related art, this specification provides a method, apparatus and device for assisting vehicle damage-assessment image capture.
According to a first aspect of the embodiments of this specification, a method for assisting vehicle damage-assessment image capture is provided, the method including:
acquiring an image collected by a camera module;
identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
In an embodiment, the component information includes a component position, a component size and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
In an embodiment, the component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include component information of vehicle components in the sample images.
In an embodiment, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model, and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; in the second training sample data, the sample features include sample images and component information of vehicle components in the sample images, and the sample labels include shooting distance information.
In an embodiment, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model, and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; in the third training sample data, the sample features include sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels include shooting angle information.
In an embodiment, the initial component detection model, the initial distance detection model and the initial angle detection model each include a MobileNets model.
In an embodiment, shooting in the desired pose includes one or more of panoramic shooting, mid-range shooting and close-up shooting at a specified shooting angle; panoramic, mid-range and close-up shooting are distinguished by shooting distance in descending order.
According to a second aspect of the embodiments of this specification, an apparatus for assisting vehicle damage-assessment image capture is provided, the apparatus including:
an image acquisition module, configured to acquire an image collected by a camera module;
an information detection module, configured to identify components of a target vehicle in the image, and detect the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
an information reminding module, configured to output, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
In an embodiment, the component information includes a component position, a component size and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
In an embodiment, the component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include component information of vehicle components in the sample images.
In an embodiment, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model, and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; in the second training sample data, the sample features include sample images and component information of vehicle components in the sample images, and the sample labels include shooting distance information.
In an embodiment, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model, and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; in the third training sample data, the sample features include sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels include shooting angle information.
In an embodiment, the initial component detection model, the initial distance detection model and the initial angle detection model each include a MobileNets model.
In an embodiment, shooting in the desired pose includes one or more of panoramic shooting, mid-range shooting and close-up shooting at a specified shooting angle; the panoramic, mid-range and close-up shooting are distinguished by shooting distance in descending order.
According to a third aspect of the embodiments of this specification, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following method when executing the program:
acquiring an image collected by a camera module;
identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
The technical solutions provided by the embodiments of this specification may include the following beneficial effects:
In the embodiments of this specification, an image collected by a camera module is acquired, components of the target vehicle in the image are identified, and the relative pose of the camera module and the target vehicle is detected based on at least the component information obtained from the identification, to obtain pose information including one or more of shooting distance information and shooting angle information. Based on a comparison result obtained by comparing the obtained pose information with preset desired pose information, reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose is output. Feedback on the pose of the camera module is thereby provided, which can guide the photographer to adjust the shooting manner and improve the quality of the captured vehicle damage-assessment images.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this specification.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with this specification and, together with the description, serve to explain the principles of this specification.
FIG. 1 is a diagram of an application scenario of capturing a vehicle damage-assessment image according to an exemplary embodiment of this specification.
FIG. 2 is a flowchart of a method for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification.
FIG. 3A is a flowchart of another method for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification.
FIG. 3B is an application example of assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification.
FIG. 4 is a hardware structure diagram of a computer device in which an apparatus for assisting vehicle damage-assessment image capture is located, according to an exemplary embodiment of this specification.
FIG. 5 is a block diagram of an apparatus for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification. Rather, they are merely examples of apparatuses and methods consistent with some aspects of this specification as detailed in the appended claims.
The terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit this specification. The singular forms "a", "said" and "the" used in this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various kinds of information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of this specification, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
Vehicle insurance damage assessment uses scientific and systematic professional inspection, testing and survey means to comprehensively analyze the vehicle collision and the accident scene, and uses vehicle damage-estimation data and repair data to scientifically and systematically estimate and price the repair of the collision damage. The vehicle damage-assessment image is one piece of such damage-estimation data; it is an image used to assess and verify the loss of the insured vehicle, and is usually obtained by the operator or the vehicle owner photographing the scene. In order to clearly reflect information such as the specific damaged location, the damaged components, the damage type and the damage degree, strict requirements are often imposed on the quality of damage-assessment images. With the rapid development of mobile terminals, users can take photographs at any time with a mobile terminal that has a shooting function. FIG. 1 shows an application scenario of capturing a vehicle damage-assessment image according to an exemplary embodiment of this specification: a user photographs the damaged vehicle with a mobile phone to obtain damage-assessment images. However, when the damaged vehicle is photographed by a non-professional, the resulting images may not meet the processing requirements, and images of different quality will directly affect the final assessment result. A processing scheme that can improve the quality of captured damage-assessment images is therefore needed.
Embodiments of this specification provide a scheme for assisting vehicle damage-assessment image capture. By adding an image-capture guidance function, the photographer can be guided to adjust the shooting manner, improving the quality of the captured images and providing accurate damage-assessment images for the subsequent assessment, verification and claims process, thereby producing more accurate assessment results.
Embodiments of this specification are illustrated below with reference to the accompanying drawings.
FIG. 2 is a flowchart of a method for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification. The method includes:
In step 202, an image collected by a camera module is acquired;
In step 204, components of a target vehicle in the image are identified, and the relative pose of the camera module and the target vehicle is detected based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
In step 206, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose is output.
The acquired image may be a stored image, for example an image saved when the shutter control was triggered; it may also be an image currently collected by the camera module but not yet stored. In one embodiment, frames are grabbed from the images collected by the camera module at a preset frequency; for example, the preset frequency may be twice per second. In another embodiment, the images collected by the camera module may also be acquired in real time, so that the user's current viewfinder picture can be checked in real time to determine whether the user needs to adjust the shooting pose.
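The frame grabbing at a preset frequency described above (for example, twice per second) can be sketched as a simple throttle that decides, for each camera frame, whether to hand it to the detection models. This is a minimal sketch; the 2 Hz rate, the class name and the timestamp source are illustrative assumptions, not details specified in this document.

```python
class FrameSampler:
    """Passes through at most `rate_hz` frames per second for detection."""

    def __init__(self, rate_hz=2.0):
        self.min_interval = 1.0 / rate_hz
        self.last_taken = None

    def should_process(self, timestamp):
        # Accept the frame only if enough time has passed since the last one.
        if self.last_taken is None or timestamp - self.last_taken >= self.min_interval:
            self.last_taken = timestamp
            return True
        return False

sampler = FrameSampler(rate_hz=2.0)
# A 30 fps camera stream over one second: only 2 frames reach the models.
taken = [t / 30.0 for t in range(30) if sampler.should_process(t / 30.0)]
print(taken)  # [0.0, 0.5]
```

Throttling this way keeps the on-device models within their real-time budget while still sampling the viewfinder often enough to give timely guidance.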
The purpose of the embodiments of this specification is to use the currently acquired image to determine the relative pose of the camera module and the target vehicle, that is, the relative relationship between the capture module and the target vehicle. The relative pose may be a shooting distance or a shooting angle. By comparing the determined shooting distance or shooting angle with the desired shooting distance or shooting angle, and outputting reminder information that guides the user according to the comparison result, real-time feedback on the shooting can be given, correcting the poor image quality caused by an improper shooting manner.
To improve the accuracy of the pose information, this embodiment may first identify the component information of the target vehicle in the image. A component may be a constituent part of the target vehicle, and component information may be information describing a component. In one embodiment, the component information may include a component position, a component size and a component identifier. The component position may be the position of the component in the image; the component size may be the size of the component in the image; the component identifier may be an identifier distinguishing different components on the target vehicle, for example a component-name code. In one embodiment, the component size and component position may be annotated with a component bounding box, with the coordinates of the bounding box representing the component size and position. In other embodiments, the component information may further include information such as component shape, which is not detailed one by one here.
A method of identifying component information is illustrated below as an example.
The component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include component information of the vehicle components in the sample images, such as the component size, component position and component identifier.
In the model training stage, the embodiments of this specification use sample images as sample features and the component information of the vehicle components in the sample images as sample labels, construct the first training sample data, and train the initial component detection model with the training sample data to obtain the preset component detection model. Further, other component information may also be used as sample labels; for example, one or more of component size, component damage state, damage degree, damage size, damage position and other component-related information may be used as sample labels, so that the component detection model can predict more component information and achieve a better guidance effect.
This embodiment trains the component detection model by supervised learning. In one example, a deep learning model may be used as the initial component detection model, and in particular a MobileNets model may be used to build it, which guarantees computational efficiency while ensuring accuracy, achieves real-time computation and feedback even on low-end mobile devices, and provides a good user experience.
MobileNets is based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks. Through its global hyperparameters, it trades off latency and accuracy efficiently, allowing the model builder to choose a model of appropriate size for the application based on the constraints of the problem.
Therefore, after the component detection model is obtained, it can be used to predict the component information of the target vehicle in the image, improving the efficiency of obtaining component information.
Since the position, size and name of the components of the target vehicle in the image are determined, the relative pose of the capture device and the target vehicle, for example the shooting distance and the shooting angle, can be predicted.
The shooting distance information may be information describing the distance between the camera module and the target vehicle. In one embodiment, the shooting distance information may be a distance value. In another embodiment, it may be the distance range to which the distance between the camera module and the target vehicle belongs. Describing the distance relationship by distance ranges allows distance comparison to be performed by range, which improves comparison efficiency and reduces the difficulty of distance detection. In one example, the distance range may be expressed as a numerical range. In another example, it may be expressed as a distance level; for example, different distance ranges may be represented by three levels (far, medium and near), corresponding to panorama, mid-range and close-up. For instance, when the determined distance range differs from the desired distance range, a distance-adjustment reminder may be given.
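The far/medium/near distance levels described above can be implemented as a simple bucketing of an estimated distance. The threshold values below are illustrative assumptions; this document only requires that panorama, mid-range and close-up correspond to decreasing shooting distances.

```python
# Illustrative thresholds in metres; not values specified in this document.
DISTANCE_LEVELS = [
    (1.0, "near"),    # below 1 m       -> close-up range
    (3.0, "medium"),  # 1 m up to 3 m   -> mid-range
]

def distance_level(distance_m):
    """Maps an estimated distance to the level used for pose comparison."""
    for upper, level in DISTANCE_LEVELS:
        if distance_m < upper:
            return level
    return "far"      # 3 m and beyond  -> panorama range

print(distance_level(0.5), distance_level(2.0), distance_level(6.0))
# near medium far
```

Comparing coarse levels rather than raw distance values is what makes the feedback robust: the model only needs to predict the correct bucket, not a precise metric distance.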
In one embodiment, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model, and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; in the second training sample data, the sample features include sample images and component information of vehicle components in the sample images, and the sample labels include shooting distance information.
In the model training stage, the embodiments of this specification use sample images and the component information of vehicle components in the sample images as sample features, and shooting distance information as sample labels, to construct the second training sample data, and train the initial distance detection model with the second training sample data to obtain the preset distance detection model. This embodiment uses not only the sample images but also the component information as sample features, which can improve the prediction results of the distance detection model.
This embodiment trains the distance detection model by supervised learning. In one example, a deep learning model may be used as the initial distance detection model, in particular a MobileNets model.
In the application stage, the output data of the component detection model and the image can be input into the preset distance detection model, and the data result of the distance detection model is taken as the shooting distance information, thereby predicting the shooting distance information with the distance detection model and improving its accuracy.
The shooting angle information may be information on the relative angle between the lens plane of the camera module and the target vehicle. In one embodiment, the shooting angle information may be a specific shooting angle value. In another embodiment, it may be the angle range to which the shooting angle belongs. For example, the angle information may be expressed as overhead, oblique, upward, head-on and the like; as another example, the angle range may be expressed as a specific range of values.
In one embodiment, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model, and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; in the third training sample data, the sample features include sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels include shooting angle information.
In the model training stage, the embodiments of this specification use sample images, the component information of vehicle components in the sample images and the shooting distance information as sample features, and shooting angle information as sample labels, to construct the third training sample data, and train the initial angle detection model with the third training sample data to obtain the preset angle detection model. This embodiment uses not only the sample images but also the component information and the shooting distance information as sample features, which can improve the prediction results of the angle detection model.
This embodiment trains the angle detection model by supervised learning. In one example, a deep learning model may be used as the initial angle detection model, in particular a MobileNets model.
In the application stage, the output data of the component detection model, the output data of the distance detection model and the image can be input into the trained angle detection model, and the prediction result of the angle detection model is taken as the shooting angle information, thereby predicting the shooting angle information with the angle detection model and improving its accuracy.
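The application-stage data flow, with the component model's output feeding the distance model and both feeding the angle model, can be sketched with stub predictors standing in for the trained MobileNets models. The stub functions and their return values are placeholders for illustration, not real model outputs or APIs.

```python
def component_model(image):
    # Stub for the component detection model: returns the component info
    # (identifier, bounding box) it would predict from the image.
    return [{"id": "front_bumper", "box": (40, 60, 200, 140)}]

def distance_model(image, components):
    # Stub for the distance detection model: takes the image plus the
    # component model's output, returns a distance level.
    return "medium"

def angle_model(image, components, distance):
    # Stub for the angle detection model: takes the image plus both
    # upstream outputs, returns an angle range.
    return "oblique"

def estimate_pose(image):
    # Chained inference: each model consumes the outputs of the earlier ones.
    components = component_model(image)
    distance = distance_model(image, components)
    angle = angle_model(image, components, distance)
    return {"components": components, "distance": distance, "angle": angle}

pose = estimate_pose(image=b"raw-frame-bytes")
print(pose["distance"], pose["angle"])  # medium oblique
```

The chaining mirrors how each downstream model is trained: its sample features include the upstream predictions, so at inference time it must receive them alongside the raw image.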
In one embodiment, the entire core pipeline uses on-device deep learning models, which guarantees computational efficiency while ensuring accuracy, achieves real-time computation and feedback even on mid- and low-end devices, and provides a good user experience.
It can be understood that in other embodiments a single model may estimate both the shooting distance information and the shooting angle information of the camera module relative to the target vehicle, which is not detailed here.
After the current pose information is determined based on the acquired image, the pose information can be compared with the preset desired pose information, and reminder information for guiding the user to control the camera module to shoot the target vehicle in the desired pose is output according to the comparison result. The desired pose may be the relative relationship between the camera module and the target vehicle when the desired pose information is satisfied. If the shooting distance does not fall within the desired shooting distance, the user may be reminded to control the camera module to shoot the target vehicle at the desired distance; if the shooting angle does not fall within the desired shooting angle, the user may be reminded to shoot at the desired angle. For example, if the shooting distance is greater than the desired shooting distance, the user may be reminded to move the camera module closer to the target vehicle. As another example, if the shooting distance information is "near" while the desired shooting distance information is "medium", the user may be reminded to move the camera module slightly away from the target vehicle.
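The comparison step above, checking the detected pose against the desired pose and choosing a reminder, can be sketched as a small rule table. The reminder strings follow the examples in this document, while the rule structure and function names are illustrative assumptions.

```python
LEVEL_ORDER = {"near": 0, "medium": 1, "far": 2}

def pose_reminder(detected, desired):
    """Returns a guidance message, or None when the pose already matches."""
    if detected["distance"] != desired["distance"]:
        if LEVEL_ORDER[detected["distance"]] > LEVEL_ORDER[desired["distance"]]:
            return "Please move closer and photograph the damage details."
        return "Please step back and photograph the damaged area."
    if detected["angle"] != desired["angle"]:
        return f"Please shoot at a {desired['angle']} angle."
    return None

desired = {"distance": "near", "angle": "45-degree"}
print(pose_reminder({"distance": "far", "angle": "45-degree"}, desired))
# Please move closer and photograph the damage details.
print(pose_reminder({"distance": "near", "angle": "45-degree"}, desired))
# None
```

Distance is checked before angle here; the document does not prescribe an ordering, so a real implementation could prioritize differently or combine both into one message.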
In one example, shooting in the desired pose includes one or more of close-up shooting, mid-range shooting, panoramic shooting at a specified shooting angle, and the like. The shooting distance for panoramic shooting is greater than that for mid-range shooting, and the shooting distance for mid-range shooting is greater than that for close-up shooting. The specified shooting angle may be, for example, a 45-degree angle. When the shooting distance and shooting angle determined from the current picture are found not to match the desired shooting information, a corresponding reminder is output. For example, for panoramic shooting, the reminder "Please take a 45-degree panoramic photo of the vehicle and make sure the license plate is visible" is output according to the comparison result; for close-up shooting, "Please move closer and photograph the vehicle damage details so I can see the extent of the loss"; and for mid-range shooting, "Please step back two steps and photograph the damaged area of the vehicle so I can see an overview of the loss".
The reminder information may be output as a voice broadcast or as a text reminder, and further as a picture reminder, for example by providing an image of a sample vehicle shot in the desired pose, so that the image reminds the user intuitively.
To guide the user to capture vehicle damage-assessment images that meet the requirements, in one embodiment an instruction to shoot the target vehicle in the desired pose may also be output before the image is acquired. The purpose of outputting the instruction is to remind the user of the type of image to be taken, so as to obtain multiple types of damage-assessment images. For example, the instruction may include one or more of a panoramic shooting instruction, a mid-range shooting instruction, a close-up shooting instruction, a specified-angle shooting instruction, and the like.
It can be seen that, by outputting the instruction, the user can take multiple types of images that meet the requirements under the guidance of the instruction.
The various technical features in the above implementations can be combined arbitrarily as long as there is no conflict or contradiction between them; for reasons of space they are not described one by one, but any combination of the technical features in the above implementations also falls within the scope disclosed by this specification.
One such combination is illustrated below as an example.
FIG. 3A shows the flow of another method for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification. After the capture component is opened, the photographing mode can be entered. While the lens moves to capture the real scene, instructions to shoot the target vehicle in the desired pose may be output; for example, the instructions include one or more of a panoramic shooting instruction, a mid-range shooting instruction, a close-up shooting instruction and a specified-angle shooting instruction. The capture component grabs frames at a preset frequency and, for each grabbed frame, calls the component's built-in models for processing. The image is input into the component detection model to identify components and obtain component information. The output data of the component detection model and the image are input into the preset distance detection model to obtain shooting distance information. The output data of the component detection model, the output data of the distance detection model and the image are input into the trained angle detection model to obtain shooting angle information. Based on the output results of the component detection model, the distance detection module and the angle detection model, a conclusion is output through a fusion model, and the capture component outputs user prompts according to rules, prompting the user to adjust the distance (closer or farther) and the shooting angle. In other embodiments, the collected images may also be computed in real time to produce prompt feedback.
FIG. 3B shows an application example of assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification. For each shooting type, an example picture can be provided, so that a user who does not know how to shoot can follow the example; FIG. 3B illustrates the example picture for the panorama. In this example, the user is required to take three types of images: a panoramic image, a close-up image and a mid-range image, with corresponding desired pose information configured for each image type. The user opens the Dingsunbao (定损宝) damage-assessment app and enters the vehicle panorama shooting scene. In the panorama scene, if the shooting distance and shooting angle detected from the currently grabbed image do not meet the desired pose information for panoramic shooting, the reminder "Please take a 45-degree panoramic photo of the vehicle and make sure the license plate is visible" is output. After detecting that the user has completed the shot under the guidance, the flow enters the vehicle mid-range shooting scene. In the mid-range scene, if the detected shooting distance is less than the desired mid-range shooting distance, the reminder "Please step back two steps and photograph the damaged area of the vehicle so I can see an overview of the loss" is output. After the user completes the shot under the guidance, the flow enters the vehicle close-up shooting scene. In the close-up scene, if the detected shooting distance is greater than the desired close-up shooting distance, the reminder "Please move closer and photograph the vehicle damage details so I can see the extent of the loss" is output. After detecting that the user has completed the shot under the guidance, the captured images can be displayed for the user to preview, and the damage-assessment images are submitted when the submission control is touched. It can be understood that corresponding reminders can also be output for other comparison results, which are not detailed here. As can be seen from the above embodiment, based on the currently captured content, the AI model computing power of the mobile terminal is used to fuse the component, distance and shooting-angle results into accurate positional feedback such as shooting distance and shooting angle, which guides the user to adjust the shooting manner and produce higher-quality vehicle damage-assessment images.
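The panorama, then mid-range, then close-up scene flow of the application example can be sketched as a small state machine that advances only once a satisfactory shot has been detected for the current scene. The scene list and the boolean acceptance check are illustrative assumptions standing in for the per-scene pose comparison.

```python
SCENES = ["panorama", "mid-range", "close-up"]

class CaptureFlow:
    """Walks the user through the required shot types in order."""

    def __init__(self):
        self.index = 0

    @property
    def current_scene(self):
        return SCENES[self.index] if self.index < len(SCENES) else None

    def record_shot(self, pose_ok):
        # Advance to the next scene only when the shot met the desired pose;
        # otherwise stay in place so a reminder can be shown.
        if pose_ok and self.index < len(SCENES):
            self.index += 1
        return self.current_scene

flow = CaptureFlow()
flow.record_shot(pose_ok=True)   # panorama done -> mid-range
flow.record_shot(pose_ok=False)  # stay in mid-range, remind the user
flow.record_shot(pose_ok=True)   # mid-range done -> close-up
print(flow.current_scene)        # close-up
```

When `current_scene` becomes `None`, all required image types have been captured and the preview/submit step described above can be shown.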
Corresponding to the foregoing embodiments of the method for assisting vehicle damage-assessment image capture, this specification also provides embodiments of an apparatus for assisting vehicle damage-assessment image capture and of the electronic device to which the apparatus is applied.
The embodiments of the apparatus for assisting vehicle damage-assessment image capture in this specification can be applied to computer devices. The apparatus embodiments may be implemented by software, or by hardware or a combination of software and hardware. Taking software implementation as an example, as an apparatus in the logical sense, it is formed by the processor of the computer device in which it is located reading the corresponding computer program instructions from non-volatile storage into memory and running them. In terms of hardware, FIG. 4 is a hardware structure diagram of the computer device in which the apparatus for assisting vehicle damage-assessment image capture is located. In addition to the processor 410, network interface 420, memory 430 and non-volatile storage 440 shown in FIG. 4, the computer device in which the apparatus 431 for assisting vehicle damage-assessment image capture is located may generally include other hardware according to the device's actual functions, which is not detailed here.
FIG. 5 is a block diagram of an apparatus for assisting vehicle damage-assessment image capture according to an exemplary embodiment of this specification. The apparatus includes:
an image acquisition module 52, configured to acquire an image collected by a camera module;
an information detection module 54, configured to identify components of a target vehicle in the image, and detect the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
an information reminding module 56, configured to output, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
In an embodiment, the component information includes a component position, a component size and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
In an embodiment, the component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; in the first training sample data, the sample features include sample images, and the sample labels include component information of vehicle components in the sample images.
In an embodiment, the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model, and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; in the second training sample data, the sample features include sample images and component information of vehicle components in the sample images, and the sample labels include shooting distance information.
In an embodiment, the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model, and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; in the third training sample data, the sample features include sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels include shooting angle information.
In an embodiment, the initial component detection model, the initial distance detection model and the initial angle detection model each include a MobileNets model.
In an embodiment, shooting in the desired pose includes one or more of panoramic shooting, mid-range shooting and close-up shooting at a specified shooting angle; the panoramic, mid-range and close-up shooting are distinguished by shooting distance in descending order.
For the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant parts. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this specification, which those of ordinary skill in the art can understand and implement without creative effort.
Correspondingly, an embodiment of this specification further provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following method when executing the program:
acquiring an image collected by a camera module;
identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on what differs from the other embodiments. In particular, since the device embodiments are basically similar to the method embodiments, their description is relatively simple, and reference may be made to the partial description of the method embodiments for relevant parts.
A computer storage medium is also provided, the storage medium storing program instructions, the program instructions including:
acquiring an image collected by a camera module;
identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information including one or more of shooting distance information and shooting angle information;
based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
The embodiments of this specification may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM, optical storage, etc.). Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Other implementations of this specification will readily occur to those skilled in the art upon consideration of the specification and practice of the invention applied for here. This specification is intended to cover any variations, uses or adaptations that follow its general principles and include common knowledge or customary technical means in the art not applied for in this specification. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of this specification indicated by the following claims.
It should be understood that this specification is not limited to the precise structure described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of this specification is limited only by the appended claims.
The above are merely preferred embodiments of this specification and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of this specification shall be included within its scope of protection.

Claims (12)

  1. A method for assisting vehicle damage-assessment image capture, the method comprising:
    acquiring an image collected by a camera module;
    identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information comprising one or more of shooting distance information and shooting angle information;
    based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
  2. The method according to claim 1, wherein the component information comprises a component position, a component size and a component identifier; and/or the shooting distance information is the distance range to which the distance between the camera module and the target vehicle belongs; and/or the shooting angle information is the angle range to which the shooting angle belongs.
  3. The method according to claim 1 or 2, wherein the component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; and in the first training sample data, the sample features comprise sample images and the sample labels comprise component information of vehicle components in the sample images.
  4. The method according to claim 3, wherein the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; and in the second training sample data, the sample features comprise sample images and component information of vehicle components in the sample images, and the sample labels comprise shooting distance information.
  5. The method according to claim 4, wherein the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; and in the third training sample data, the sample features comprise sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels comprise shooting angle information.
  6. The method according to claim 5, wherein the initial component detection model, the initial distance detection model and the initial angle detection model each comprise a MobileNets model.
  7. The method according to any one of claims 1 to 6, wherein shooting in the desired pose comprises one or more of panoramic shooting, mid-range shooting and close-up shooting at a specified shooting angle, the panoramic, mid-range and close-up shooting being distinguished by shooting distance in descending order.
  8. An apparatus for assisting vehicle damage-assessment image capture, the apparatus comprising:
    an image acquisition module, configured to acquire an image collected by a camera module;
    an information detection module, configured to identify components of a target vehicle in the image, and detect the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information comprising one or more of shooting distance information and shooting angle information;
    an information reminding module, configured to output, based on a comparison result obtained by comparing the pose information with preset desired pose information, reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
  9. The apparatus according to claim 8, wherein the component information is obtained by recognizing the image using a preset component detection model; the component detection model is obtained by training an initial component detection model using first training sample data; and in the first training sample data, the sample features comprise sample images and the sample labels comprise component information of vehicle components in the sample images.
  10. The apparatus according to claim 9, wherein the shooting distance information is obtained by using the output data of the component detection model and the image as input data of a preset distance detection model and performing prediction with the distance detection model; the distance detection model is obtained by training an initial distance detection model using second training sample data; and in the second training sample data, the sample features comprise sample images and component information of vehicle components in the sample images, and the sample labels comprise shooting distance information.
  11. The apparatus according to claim 10, wherein the shooting angle information is obtained by using the output data of the component detection model, the output data of the distance detection model and the image as input data of a preset angle detection model and performing prediction with the angle detection model; the angle detection model is obtained by training an initial angle detection model using third training sample data; and in the third training sample data, the sample features comprise sample images, component information of vehicle components in the sample images and shooting distance information, and the sample labels comprise shooting angle information.
  12. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following method when executing the program:
    acquiring an image collected by a camera module;
    identifying components of a target vehicle in the image, and detecting the relative pose of the camera module and the target vehicle based on at least the component information obtained from the identification, to obtain pose information, the pose information comprising one or more of shooting distance information and shooting angle information;
    based on a comparison result obtained by comparing the pose information with preset desired pose information, outputting reminder information for guiding a user to control the camera module to shoot the target vehicle in the desired pose.
PCT/CN2019/096321 2018-08-31 2019-07-17 用于辅助车辆定损图像拍摄的方法、装置及设备 WO2020042800A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811013914.6 2018-08-31
CN201811013914.6A CN109325488A (zh) 2018-08-31 2018-08-31 用于辅助车辆定损图像拍摄的方法、装置及设备

Publications (1)

Publication Number Publication Date
WO2020042800A1 true WO2020042800A1 (zh) 2020-03-05

Family

ID=65264238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096321 WO2020042800A1 (zh) 2018-08-31 2019-07-17 用于辅助车辆定损图像拍摄的方法、装置及设备

Country Status (3)

Country Link
CN (1) CN109325488A (zh)
TW (1) TWI710967B (zh)
WO (1) WO2020042800A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111918049A (zh) * 2020-08-14 2020-11-10 广东申义实业投资有限公司 三维成像的方法、装置、电子设备及存储介质
CN112492105A (zh) * 2020-11-26 2021-03-12 深源恒际科技有限公司 一种基于视频的车辆外观部件自助定损采集方法及系统
CN112633295A (zh) * 2020-12-22 2021-04-09 深圳集智数字科技有限公司 面向循环任务的预测方法、装置、电子设备及存储介质
CN113516013A (zh) * 2021-04-09 2021-10-19 阿波罗智联(北京)科技有限公司 目标检测方法、装置、电子设备、路侧设备和云控平台
CN114219806A (zh) * 2022-02-22 2022-03-22 成都数联云算科技有限公司 一种汽车雷达检测方法、装置、设备、介质及产品

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325488A (zh) * 2018-08-31 2019-02-12 阿里巴巴集团控股有限公司 用于辅助车辆定损图像拍摄的方法、装置及设备
CN110033386B (zh) * 2019-03-07 2020-10-02 阿里巴巴集团控股有限公司 车辆事故的鉴定方法及装置、电子设备
CN110245552B (zh) * 2019-04-29 2023-07-18 创新先进技术有限公司 车损图像拍摄的交互处理方法、装置、设备及客户端
CN110264444B (zh) * 2019-05-27 2020-07-17 阿里巴巴集团控股有限公司 基于弱分割的损伤检测方法及装置
US10783643B1 (en) 2019-05-27 2020-09-22 Alibaba Group Holding Limited Segmentation-based damage detection
CN110659567B (zh) * 2019-08-15 2023-01-10 创新先进技术有限公司 车辆损伤部位的识别方法以及装置
CN110659568B (zh) * 2019-08-15 2023-06-23 创新先进技术有限公司 验车方法及装置
CN110598562B (zh) * 2019-08-15 2023-03-07 创新先进技术有限公司 车辆图像采集引导方法以及装置
CN110660000A (zh) * 2019-09-09 2020-01-07 平安科技(深圳)有限公司 数据预测方法、装置、设备及计算机可读存储介质
CN110658731B (zh) * 2019-10-21 2020-07-03 珠海格力电器股份有限公司 一种智能家电配网方法、存储介质和智能终端
CN113038018B (zh) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 辅助用户拍摄车辆视频的方法及装置
CN110910628B (zh) * 2019-12-02 2021-02-12 支付宝(杭州)信息技术有限公司 车损图像拍摄的交互处理方法、装置、电子设备
CN111985448A (zh) * 2020-09-02 2020-11-24 深圳壹账通智能科技有限公司 车辆图像识别方法、装置、计算机设备及可读存储介质
CN112288800B (zh) * 2020-09-27 2023-05-12 山东浪潮科学研究院有限公司 一种服务器机柜门锁眼识别方法、设备及装置
CN112348686B (zh) * 2020-11-24 2021-07-13 德联易控科技(北京)有限公司 理赔图片的采集方法、装置及通讯设备
CN112364820A (zh) * 2020-11-27 2021-02-12 深源恒际科技有限公司 一种基于深度学习的车险核保验车图片采集方法及系统
CN113132632B (zh) * 2021-04-06 2022-08-19 蚂蚁胜信(上海)信息技术有限公司 一种针对宠物的辅助拍摄方法和装置
CN113627252A (zh) * 2021-07-07 2021-11-09 浙江吉利控股集团有限公司 一种车辆定损方法、装置、存储介质及电子设备
CN113840085A (zh) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 车源信息的采集方法、装置、电子设备及可读介质
CN117455466B (zh) * 2023-12-22 2024-03-08 南京三百云信息科技有限公司 一种汽车远程评估的方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132251A1 (en) * 2015-02-16 2016-08-25 Coolbox S.R.L. Device and method for monitoring a vehicle, particularly for the management of loss events
CN106139564A (zh) * 2016-08-01 2016-11-23 纳恩博(北京)科技有限公司 图像处理方法和装置
CN106600421A (zh) * 2016-11-21 2017-04-26 中国平安财产保险股份有限公司 一种基于图片识别的车险智能定损方法及系统
CN106657755A (zh) * 2015-07-30 2017-05-10 中兴通讯股份有限公司 拍照的方法及装置
CN109325488A (zh) * 2018-08-31 2019-02-12 阿里巴巴集团控股有限公司 用于辅助车辆定损图像拍摄的方法、装置及设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9232143B2 (en) * 2013-09-18 2016-01-05 Wipro Limited Method and system for assistive photography
CN104407351B (zh) * 2014-12-05 2017-05-17 北京公科飞达交通工程发展有限公司 车辆窗口位置识别方法
CN104637342B (zh) * 2015-01-22 2017-01-04 江苏大学 一种狭小垂直车位场景智能识别与泊车路径规划系统及方法
CN105069451B (zh) * 2015-07-08 2018-05-25 北京智能综电信息技术有限责任公司 一种基于双目摄像头的车牌识别与定位方法
CN107229625A (zh) * 2016-03-23 2017-10-03 北京搜狗科技发展有限公司 一种拍摄处理方法和装置、一种用于拍摄处理的装置
TWI603271B (zh) * 2016-10-20 2017-10-21 元智大學 自動化電動機車識別與車體瑕疵檢測之方法及其系統
CN108240994A (zh) * 2016-12-24 2018-07-03 天津科寻科技有限公司 应用物联网传感器监测汽车底盘是否划伤的装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132251A1 (en) * 2015-02-16 2016-08-25 Coolbox S.R.L. Device and method for monitoring a vehicle, particularly for the management of loss events
CN106657755A (zh) * 2015-07-30 2017-05-10 中兴通讯股份有限公司 拍照的方法及装置
CN106139564A (zh) * 2016-08-01 2016-11-23 纳恩博(北京)科技有限公司 图像处理方法和装置
CN106600421A (zh) * 2016-11-21 2017-04-26 中国平安财产保险股份有限公司 一种基于图片识别的车险智能定损方法及系统
CN109325488A (zh) * 2018-08-31 2019-02-12 阿里巴巴集团控股有限公司 用于辅助车辆定损图像拍摄的方法、装置及设备

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111918049A (zh) * 2020-08-14 2020-11-10 广东申义实业投资有限公司 三维成像的方法、装置、电子设备及存储介质
CN111918049B (zh) * 2020-08-14 2022-09-06 广东申义实业投资有限公司 三维成像的方法、装置、电子设备及存储介质
CN112492105A (zh) * 2020-11-26 2021-03-12 深源恒际科技有限公司 一种基于视频的车辆外观部件自助定损采集方法及系统
CN112633295A (zh) * 2020-12-22 2021-04-09 深圳集智数字科技有限公司 面向循环任务的预测方法、装置、电子设备及存储介质
CN113516013A (zh) * 2021-04-09 2021-10-19 阿波罗智联(北京)科技有限公司 目标检测方法、装置、电子设备、路侧设备和云控平台
CN113516013B (zh) * 2021-04-09 2024-05-14 阿波罗智联(北京)科技有限公司 目标检测方法、装置、电子设备、路侧设备和云控平台
CN114219806A (zh) * 2022-02-22 2022-03-22 成都数联云算科技有限公司 一种汽车雷达检测方法、装置、设备、介质及产品
CN114219806B (zh) * 2022-02-22 2022-04-22 成都数联云算科技有限公司 一种汽车雷达检测方法、装置、设备、介质及产品

Also Published As

Publication number Publication date
TWI710967B (zh) 2020-11-21
TW202011254A (zh) 2020-03-16
CN109325488A (zh) 2019-02-12

Similar Documents

Publication Publication Date Title
WO2020042800A1 (zh) 用于辅助车辆定损图像拍摄的方法、装置及设备
TWI726194B (zh) 基於圖像的車輛定損方法、裝置及電子設備
TWI709091B (zh) 圖像處理方法和裝置
KR102418446B1 (ko) 픽쳐 기반의 차량 손해 평가 방법 및 장치, 및 전자 디바이스
EP3520045B1 (en) Image-based vehicle loss assessment method, apparatus, and system, and electronic device
EP3605386A1 (en) Method and apparatus for obtaining vehicle loss assessment image, server and terminal device
CN111160172B (zh) 泊车位检测方法、装置、计算机设备和存储介质
WO2021082662A1 (zh) 辅助用户拍摄车辆视频的方法及装置
CN109495686B (zh) 拍摄方法及设备
US11102413B2 (en) Camera area locking
US10291838B2 (en) Focusing point determining method and apparatus
CN110659397A (zh) 一种行为检测方法、装置、电子设备和存储介质
US10909388B2 (en) Population density determination from multi-camera sourced imagery
CN110910628B (zh) 车损图像拍摄的交互处理方法、装置、电子设备
US10313596B2 (en) Method and apparatus for correcting tilt of subject ocuured in photographing, mobile terminal, and storage medium
US20160360091A1 (en) Optimizing Capture Of Focus Stacks
CN114267041B (zh) 场景中对象的识别方法及装置
CN110705532A (zh) 一种识别翻拍图像的方法、装置及设备
CN111368944B (zh) 翻拍图像、证件照识别、模型训练方法、装置及电子设备
CN110766077A (zh) 证据链图像中特写图筛选方法、装置和设备
CN111368752B (zh) 车辆损伤的分析方法和装置
US11012613B2 (en) Flat surface detection in photographs
CN116980744B (zh) 基于特征的摄像头追踪方法、装置、电子设备及存储介质
JP2016213689A (ja) 撮像装置及びその制御方法
CN113486725A (zh) 智能车辆定损方法及装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19856394

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19856394

Country of ref document: EP

Kind code of ref document: A1