WO2023273035A1 - Image capturing method, image classification model training method, apparatus, and electronic device - Google Patents


Info

Publication number
WO2023273035A1
WO2023273035A1 (application PCT/CN2021/125788, CN2021125788W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
classification model
user
type
recognized
Prior art date
Application number
PCT/CN2021/125788
Other languages
English (en)
French (fr)
Inventor
缪石乾
Original Assignee
阿波罗智联(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿波罗智联(北京)科技有限公司
Publication of WO2023273035A1 publication Critical patent/WO2023273035A1/zh

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Matching criteria, e.g. proximity measures

Definitions

  • the present disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision and machine learning.
  • the present disclosure provides an image capturing method, an image classification model training method, a device and electronic equipment.
  • an image capturing method including:
  • the image to be recognized is stored.
  • a method for training an image classification model including:
  • an image capture device including:
  • the first determination module is configured to determine the image to be recognized captured by the image acquisition device
  • the second determination module is configured to determine the type of the image to be recognized based on the pre-trained target image classification model
  • the storage module is configured to store the image to be recognized if the type of the image to be recognized is the type desired by the user.
  • an image classification model training device including:
  • the receiving module is configured to receive the uploaded image type that the user wants to store and the image corresponding to each image type;
  • the training module is configured to train the target image classification model based on the uploaded image types that the user wants to store and the images corresponding to each image type;
  • the sending module is configured to send the trained target image classification model.
  • an electronic device comprising:
  • a memory communicatively coupled to at least one of the processors; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the above method.
  • a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the above method.
  • a computer program product comprising a computer program which, when executed by a processor, implements the above method.
  • FIG. 1 is a flowchart of an image capturing method provided according to the present disclosure
  • Fig. 2 is a schematic flow chart of an image classification model training method provided according to the present disclosure
  • FIG. 3 is a schematic structural diagram of an image capture device provided by the present disclosure.
  • FIG. 4 is a schematic structural diagram of an image classification model training device provided by the present disclosure.
  • FIG. 5 is a block diagram of an electronic device used to implement an embodiment of the present disclosure.
  • Figure 1 shows an image capture method provided by an embodiment of the present disclosure; specifically, it can be applied to a vehicle-mounted terminal. As shown in Figure 1, the method includes:
  • Step S101, determining the image to be recognized captured by the image acquisition device;
  • the image acquisition device may be the driving recorder (dashcam) of the user's vehicle, where the driving recorder is connected to the vehicle-mounted equipment; or it may be a terminal device with an image-recording function, such as the user's mobile phone, connected to the vehicle-mounted equipment. If the user's vehicle is a self-driving vehicle, it may also be another visual sensor configured on that vehicle.
  • the image captured by the image acquisition device may be used as the image to be recognized, and it is stored after judging whether it belongs to an image type the user wants.
  • Step S102, determining the type of the image to be recognized based on the pre-trained target image classification model;
  • the type of the image to be recognized is determined by the target image classification model. The target image classification model can be implemented with a deep neural network, such as an image classification model based on AlexNet, LeNet, VGG, GoogLeNet, a Residual Network, or other networks, or any other image classification model that can realize the functions of this application.
  • the target image classification model may be trained locally; that is, the pre-trained target image classification model is trained on-device from the image types the user selects to store and the images corresponding to each of those types.
  • the target image classification model can also be trained on the server, and then sent to the user-side terminal device.
  • the target image classification model can be trained on a cloud server and then sent to the vehicle terminal. The model is trained from the image types the user selects to store and the images for each type that the user uploads to the server; that is, the model is trained on sample data selected by the user, which personalizes the trained model and makes the results of the target image classification model match the user's expectations more closely.
  • Step S103, if the type of the image to be recognized is a type that the user wants to store, storing the image to be recognized.
  • the model outputs a probability for each image category, and the category with the highest probability can be taken as the type of the image to be recognized. If that type is one the user wants, the image to be recognized is stored.
  • the video segment associated with the stored image can also be stored, so that an image of better quality (e.g., the target object is clear, completely captured, and located near the middle of the frame) can later be selected from the associated segment, while non-associated video segments are deleted, reducing unnecessary memory usage.
  • alternatively, all video can be deleted and only the images to be stored are kept.
  • the frames adjacent to the image to be recognized in the video can be further examined to find an image of better quality, for example by checking whether the target object is completely captured and whether the target object lies in the middle of the frame.
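The quality criteria just mentioned (target fully captured, target near the frame centre) could be scored as below; the scoring formula and the border margin are assumptions for illustration, since the disclosure only names the criteria, and the bounding box is presumed to come from some upstream detector:

```python
def frame_quality(bbox, frame_w, frame_h, margin=0.05):
    """Score a frame by whether the target's bounding box is fully inside
    the frame and how close its centre is to the frame centre.
    `bbox` is (x0, y0, x1, y1) in pixels; returns a score in [0, 1]."""
    x0, y0, x1, y1 = bbox
    # A box touching the frame border suggests the target is cut off.
    mx, my = frame_w * margin, frame_h * margin
    complete = x0 > mx and y0 > my and x1 < frame_w - mx and y1 < frame_h - my
    if not complete:
        return 0.0
    # Score 1.0 when the box centre coincides with the frame centre.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    off = abs(cx - frame_w / 2) / frame_w + abs(cy - frame_h / 2) / frame_h
    return max(0.0, 1.0 - off)
```

The adjacent frame with the highest score would then replace the originally selected keyframe.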
  • in this method, the type of the captured image is determined by the trained target image classification model, and the image is stored only if its type is one the user desires, which reduces the memory space occupied by captured images. In addition, the target image classification model is trained on the image types the user selects to store and the images corresponding to each type, which strengthens the association between the trained model and the user, so the classification results determined by the model match the results the user expects more closely.
  • An embodiment of the present disclosure provides a possible implementation in which the pre-trained target image classification model is obtained by training on the image types the user selects to store and the images corresponding to each type, including:
  • pre-training in this disclosure means building a network model for a specific image classification task: the parameters are initialized randomly, the network is trained, and the parameters are adjusted continuously until the network's loss becomes sufficiently small. The initialized parameters keep changing during training; when the result meets the predetermined requirements, the model parameters are saved so that the trained model performs better the next time it handles a similar task. This process is pre-training.
  • model fine-tuning (fine tuning) means training with parameters obtained elsewhere, a possibly modified network, and one's own data, so that the parameters adapt to that data; such a process is usually called fine-tuning.
  • an example of fine-tuning: although CNNs have made great progress in image recognition, applying a CNN to a user's own dataset usually faces a problem: the user's dataset is not very large, with perhaps only dozens of images per class. Directly training a network on such data is not feasible, because a key factor in the success of deep learning is a training set composed of a large amount of labeled data. With only very little data at hand, even a very good network structure cannot achieve high performance.
  • pre-training refers to a pre-trained model or the process of pre-training the model
  • fine-tuning refers to the process of applying the pre-trained model to its own data set and adapting the parameters to its own data set.
  • in this way, the classification results of the trained image classification model match the results the user expects more closely; across multiple users, this improves the personalization of the trained models and meets the needs of different users.
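The pre-train/fine-tune split described above can be sketched with a toy model: a frozen "backbone" stands in for the pre-trained network, and only a new softmax head is trained on the user's small dataset. The fixed weights and the tiny example data are illustrative assumptions; the disclosure's AlexNet/VGG-style networks would play the backbone's role:

```python
import numpy as np

def backbone(x):
    """Frozen feature extractor standing in for a pre-trained network
    (the weights below are fixed, i.e. "pre-trained", never updated)."""
    W_pre = np.array([[1.0, -1.0], [0.5, 2.0]])
    return np.tanh(x @ W_pre)

def fine_tune_head(X, y, n_classes, lr=0.5, steps=200):
    """Fine-tuning: train only a new softmax head on frozen features."""
    F = backbone(X)
    W = np.zeros((F.shape[1], n_classes))
    for _ in range(steps):
        logits = F @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of the cross-entropy loss w.r.t. the head weights only.
        grad = F.T @ (p - np.eye(n_classes)[y]) / len(X)
        W -= lr * grad
    return W

def predict(X, W):
    """Classify inputs with the frozen backbone plus fine-tuned head."""
    return (backbone(X) @ W).argmax(axis=1)
```

Because only the head's parameters move, a handful of user-labelled images per class is enough, which is exactly why fine-tuning suits the small per-user datasets discussed above.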
  • determining the image to be recognized captured by the image acquisition device includes:
  • Step S1011 (not shown in the figure), acquiring the video to be identified captured by the image acquisition device;
  • Step S1012 (not shown in the figure), determining the image to be recognized from the acquired video by means of a clustering algorithm.
  • the video taken by the image acquisition device can be obtained, and then relevant video frames can be extracted from the video as the image to be recognized.
  • a representative frame can be determined from the video frames as the image to be recognized based on a clustering algorithm, which reduces the processing load of subsequent recognition.
  • the image to be recognized can be determined from the video by a clustering algorithm, such as unsupervised clustering or k-means clustering. For k-means, the k value can be determined from the video duration combined with the vehicle's driving speed. Specifically, at the same vehicle speed, the longer the video, the larger the k value; for the same video duration, the faster the vehicle travels, the larger the k value, and the slower it travels, the smaller the k value.
  • K-Means is an iterative dynamic clustering algorithm, where K represents the number of categories and Means represents the mean value.
  • K-Means is an algorithm for clustering data points through the mean value.
  • given a preset K value and an initial centroid for each category, the K-Means algorithm assigns similar data points to the same category, recomputes each category's mean after assignment, and iterates to obtain the optimal clustering result.
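The keyframe selection described above can be sketched as follows. Each frame is represented by a feature vector; how that vector is computed (e.g. a colour histogram) is an assumption here, not specified by the disclosure:

```python
import numpy as np

def kmeans_keyframes(features, k, iters=20, seed=0):
    """Cluster per-frame feature vectors with k-means and return the index
    of the frame closest to each centroid (one representative per cluster),
    plus the cluster label of every frame.
    `features` is an (n_frames, d) array."""
    rng = np.random.default_rng(seed)
    n = len(features)
    # Initialise centroids from k distinct frames.
    centroids = features[rng.choice(n, size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each frame to its nearest centroid.
        d = np.linalg.norm(features[:, None] - centroids[None, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each non-empty cluster's centroid to the mean of its frames.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    d = np.linalg.norm(features[:, None] - centroids[None, :], axis=2)
    reps = sorted({int(d[:, j].argmin()) for j in range(k)})
    return reps, labels
```

The returned representative frames become the images to be recognized, and the per-frame labels identify which video segment each stored image is associated with.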
  • the video segments associated with the stored image can be saved and the non-associated segments deleted, where an associated segment is the segment corresponding to the video frames in the same cluster (that is, the frames assigned to the same centroid).
  • when the clustering algorithm is the k-means algorithm, its k value is determined from the video duration and the current vehicle speed, ensuring that enough images to be recognized are selected to avoid missing images the user wants, while avoiding so many images that the amount of subsequent data processing grows excessively.
  • An embodiment of the present disclosure provides a possible implementation, wherein storing the image to be recognized includes:
  • the images to be recognized are classified and stored based on the types of the images to be recognized.
  • the images to be recognized are classified and stored according to their types, which makes it easier for the user to find related images and improves image lookup efficiency.
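Classified storage can be sketched as filing each recognized image under a per-type directory; the directory layout and file naming here are assumptions, not specified by the disclosure:

```python
from pathlib import Path
import shutil

def store_by_type(image_path, image_type, root):
    """File the recognised image under a per-type directory so the user
    can later browse by category, e.g. root/sunset/frame_0042.jpg."""
    dest_dir = Path(root) / image_type
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(image_path).name
    shutil.copy2(image_path, dest)  # preserves file timestamps
    return dest
```

Looking up all images of one type then reduces to listing a single directory instead of scanning the whole gallery.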
  • a method for training an image classification model is provided, applied to a server, where the server may be deployed centrally or in a distributed manner; as shown in Fig. 2, the method includes:
  • Step S201, receiving the uploaded image types the user wants to store and the images corresponding to each image type;
  • through the application interface of the vehicle terminal, the user may select the image types to be stored from the predetermined image types displayed on that interface, determine a certain number of images for each image type to be stored, and upload them to the server.
  • Step S202, training the target image classification model based on the uploaded image types the user wants to store and the images corresponding to each image type;
  • supervised learning can be performed according to the uploaded image types that the user wants to store and the images corresponding to each image type, and the target image classification model can be obtained through training.
  • Step S203, sending the trained target image classification model.
  • the trained target image classification model may be sent to the user-side terminal device.
  • the user-side terminal device is used to determine the type of the image to be recognized based on the target image classification model, and is used to store the image to be recognized if the type of the image to be recognized is the type that the user wants to store.
  • the target image classification model is trained according to the uploaded image types that the user wants to store and the images corresponding to each image type, thereby improving the personalization of the trained target classification model.
  • the embodiment of the present application provides a possible implementation, wherein, based on the image types uploaded by the user to be stored and the images corresponding to each image type, the target image classification model is trained, including:
  • Step S2021 (not shown in the figure), determining the pre-trained image classification model based on the received image types the user wants to store;
  • several image classification models can be pre-trained in advance through the pre-training process, for example an image classification model X that classifies images into types A, B, C, and D, and an image classification model Y that classifies images into types A, B, D, and E.
  • suppose A, B, C, and E are image types the user wants to store, and D is another type the user does not want to store; then the image classification model X is used as the target pre-trained image classification model.
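Selecting the pre-trained model can be sketched as picking the candidate whose class set best covers the user's wanted types; the coverage-then-fewest-extras tie-break and the example types in the usage note are assumptions for illustration:

```python
def choose_pretrained(user_types, candidates):
    """Pick the candidate model whose class set covers the most of the
    user's wanted types, breaking ties by fewer extra classes.
    `candidates` maps model name -> set of classes it can output."""
    wanted = set(user_types)
    def score(item):
        name, classes = item
        return (len(wanted & classes), -len(classes - wanted))
    return max(candidates.items(), key=score)[0]
```

For instance, with wanted types {A, B, C}, model X covering {A, B, C, D} is preferred over model Y covering {A, B, D, E}, matching the example above.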
  • Step S2022, fine-tuning the pre-trained image classification model based on the uploaded image types the user wants to store and the images corresponding to each type, to obtain the target image classification model.
  • the user can also select some unwanted pictures as type D for training, further improving the relevance of the trained image classification model to the user and the personalization of the model. In addition, training with some unwanted pictures as type D helps avoid recognizing image types the user does not want as wanted types (that is, avoids their being recognized as image types A, B, or C), and thus avoids storing large numbers of unwanted images that occupy memory space.
  • pre-training and fine-tuning in the second embodiment are the same as those in the first embodiment, and will not be repeated here.
  • compared with the prior art, in which users must stop or drive while distracted to take photos along the way (a safety problem), or must look through the video captured by the image acquisition device to find the desired picture, the solutions provided by Embodiments 1 and 2 improve efficiency.
  • the disclosure determines the image to be recognized captured by the image acquisition device; determines the type of that image based on the pre-trained target image classification model; and, if the type is one the user wants to store, stores the image. The target image classification model is trained on the image types the user selects to store and the images corresponding to each type, which improves the correlation between the trained model and the user, so the classification results determined by the model match the results the user expects more closely.
  • An embodiment of the present disclosure provides an image capturing device, as shown in FIG. 3 , the device 30 includes:
  • the first determination module 301 is configured to determine the image to be recognized captured by the image acquisition device
  • the second determination module 302 is configured to determine the type of the image to be recognized based on the pre-trained target image classification model
  • the storage module 303 is configured to store the image to be recognized if the type of the image to be recognized is the type desired by the user.
  • the embodiment of the present application provides a possible implementation, wherein the pre-trained target image classification model is trained based on the image types to be stored selected by the user and the images corresponding to each image type.
  • the embodiment of the present application provides a possible implementation in which the pre-trained target image classification model is obtained by training on the image types the user selects to store and the images corresponding to each type, including:
  • the pre-trained image classification model is fine-tuned based on the image types to be stored selected by the user and the images corresponding to each image type to obtain a target image classification model.
  • the first determination module 301 includes:
  • Acquisition unit 3011 (not shown in the figure), configured to acquire the video to be identified captured by the image acquisition device;
  • the first determination unit 3012 (not shown in the figure) is configured to determine the image to be recognized by using a clustering algorithm based on the acquired video to be recognized.
  • the clustering algorithm is a k-means clustering algorithm; wherein the k value is determined based on the duration of the video to be identified and the driving speed of the user when the video to be identified is taken.
  • the embodiment of the present application provides a possible implementation manner, wherein the storage module 303 is specifically configured to classify and store the images to be recognized based on the types of the images to be recognized.
  • An embodiment of the present disclosure provides an image classification model training device, the device 40 includes:
  • the receiving module 401 is configured to receive the image types uploaded by the user to be stored and the images corresponding to each image type;
  • the training module 402 is configured to train the target image classification model based on the uploaded image types that the user wants to store and the images corresponding to each image type;
  • the training module 402 includes:
  • the second determination unit 4021 (not shown in the figure) is configured to determine a pre-trained image classification model based on the received image type uploaded by the user to be stored;
  • the fine-tuning unit 4022 (not shown in the figure) is configured to fine-tune the pre-trained image classification model based on the uploaded image types to be stored by the user and images corresponding to each image type to obtain the target image classification model.
  • the acquisition, storage and application of the user's personal information involved are in compliance with relevant laws and regulations, and do not violate public order and good customs.
  • each module in the devices in Figs. 3-4 may be fully or partially implemented by software, hardware or a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer in the form of hardware, and can also be stored in the memory of the computer in the form of software, so that the processor can call and execute the corresponding operations of the above modules.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • the electronic device includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method provided by the embodiments of the present disclosure.
  • the readable storage medium is a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make the computer execute the method provided by the embodiments of the present disclosure.
  • the computer program product comprises a computer program which, when executed by a processor, implements the method as shown in the first aspect of the present disclosure.
  • FIG. 5 shows a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the device 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random-access memory (RAM) 503. The RAM 503 can also store various programs and data necessary for the operation of the device 500.
  • the computing unit 501, ROM 502, and RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • connected to the I/O interface 505 are: an input unit 506, such as a keyboard or a mouse; an output unit 507, such as various types of displays and speakers; a storage unit 508, such as a magnetic disk or an optical disk; and a communication unit 509, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 509 allows the device 500 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • the computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Examples of the computing unit 501 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, etc.
  • the computing unit 501 executes the various methods and processes described above, such as the image capturing method or the image classification model training method. For example, in some embodiments, the image capturing method or the image classification model training method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508.
  • part or all of the computer program may be loaded and/or installed on the device 500 via the ROM 502 and/or the communication unit 509.
  • when the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the image capturing method or image classification model training method described above can be performed.
  • the computing unit 501 may be configured in any other appropriate way (for example, by means of firmware) to execute an image capturing method or an image classification model training method.
  • Various implementations of the systems and techniques described above herein can be implemented in digital electronic circuit systems, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • the programmable processor can be special-purpose or general-purpose; it can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to that storage system, that at least one input device, and that at least one output device.
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that when the program code is executed by the processor or controller, the functions/actions specified in the flowcharts and/or block diagrams are implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • more specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard drive, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • The systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic, speech, or tactile input).
  • The systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with embodiments of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
  • steps may be reordered, added or deleted using the various forms of flow shown above.
  • Each step described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Provided are an image capturing method, an image classification model training method, an apparatus, and an electronic device. The image capturing method comprises: determining an image to be recognized captured by an image acquisition apparatus (S101); determining the type of the captured image on the basis of a trained target image classification model (S102); and, if the type of the image to be recognized is a type the user wants, storing the image to be recognized (S103). The image classification model training method comprises: receiving uploaded image types that a user wants to store and the images corresponding to each image type (S201); training a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type (S202); and sending the trained target image classification model (S203).

Description

Image capturing method, image classification model training method, apparatus, and electronic device — Technical Field
The present disclosure relates to the technical field of artificial intelligence, and in particular to the technical fields of computer vision and machine learning.
Background
With the popularization of smart cars, there are more and more scenarios in which people's lives are combined with their cars, and photographing everyday scenes has become increasingly common. Capturing photos along the route while driving, such as landscape shots, cultural scenes, or luxury cars, has gradually evolved into a new demand of in-car life.
Summary
The present disclosure provides an image capturing method, an image classification model training method, an apparatus, and an electronic device.
According to a first aspect of the present disclosure, an image capturing method is provided, comprising:
determining an image to be recognized captured by an image acquisition apparatus;
determining the type of the image to be recognized on the basis of a pretrained target image classification model; and
if the type of the image to be recognized is a type that the user wants to store, storing the image to be recognized.
According to a second aspect of the present disclosure, an image classification model training method is provided, comprising:
receiving uploaded image types that a user wants to store and the images corresponding to each image type;
training a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type; and
sending the trained target image classification model.
According to a third aspect of the present disclosure, an image capturing apparatus is provided, comprising:
a first determination module configured to determine an image to be recognized captured by an image acquisition apparatus;
a second determination module configured to determine the type of the image to be recognized on the basis of a pretrained target image classification model; and
a storage module configured to store the image to be recognized if its type is a type that the user wants to store.
According to a fourth aspect of the present disclosure, an image classification model training apparatus is provided, comprising:
a receiving module configured to receive uploaded image types that a user wants to store and the images corresponding to each image type;
a training module configured to train a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type; and
a sending module configured to send the trained target image classification model.
According to a fifth aspect of the present disclosure, an electronic device is provided, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above method.
According to a sixth aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause a computer to perform the above method.
According to a seventh aspect of the present disclosure, a computer program product is provided, comprising a computer program that, when executed by a processor, implements the above method.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description. Further features, objects, and advantages of the present disclosure can be derived from the description, the drawings, and the claims.
Brief Description of the Drawings
The drawings are provided for a better understanding of the solution and do not limit the present disclosure. In the drawings:
Fig. 1 is a flowchart of an image capturing method provided according to the present disclosure;
Fig. 2 is a schematic flowchart of an image classification model training method provided according to the present disclosure;
Fig. 3 is a schematic structural diagram of an image capturing apparatus provided by the present disclosure;
Fig. 4 is a schematic structural diagram of an image classification model training apparatus provided by the present disclosure;
Fig. 5 is a block diagram of an electronic device for implementing the embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding; they should be considered merely exemplary. Accordingly, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
Embodiment 1
Fig. 1 shows an image capturing method provided by an embodiment of the present disclosure, which can in particular be applied to an in-vehicle terminal. As shown in Fig. 1, the method includes:
Step S101: determining an image to be recognized captured by an image acquisition apparatus.
Specifically, the image acquisition apparatus may be a driving recorder of the user's vehicle, where the driving recorder is connected to the in-vehicle device; or it may be a terminal device with an image-capturing function, such as the user's mobile phone connected to the in-vehicle device. If the user's vehicle is an autonomous vehicle, the apparatus may also be another visual sensor with which the autonomous vehicle is equipped.
Specifically, an image captured by the image acquisition apparatus can be taken as the image to be recognized, and it is stored after judging whether it belongs to an image type the user wants.
Step S102: determining the type of the image to be recognized on the basis of a pretrained target image classification model.
Specifically, the type of the image to be recognized is determined by the target image classification model, which can be implemented on the basis of a deep neural network model, for example an image classification model based on networks such as AlexNet, LeNet, VGG, GoogLeNet, or Residual Network, or any other image classification model capable of implementing the functions of the present application.
The target image classification model may be trained locally; that is, the pretrained target image classification model is obtained by local training on the basis of the image types the user has selected to store and the images corresponding to each image type. Alternatively, the target image classification model may be trained on a server and then sent to the user-side terminal device. Specifically, the target image classification model may be trained on a cloud server and then sent to the in-vehicle terminal, where the model is trained on the basis of the user-selected image types to be stored and the corresponding images that the user has uploaded to the server. Training the model on sample data selected by the user personalizes the trained model, so that the results of the target image classification model match the user's expected results more closely.
Step S103: if the type of the image to be recognized is a type that the user wants to store, storing the image to be recognized.
Specifically, after the image to be recognized is input into the target image classification model, the model outputs a probability for each image category, and the category with the highest probability can be taken as the type of the image to be recognized. If this type is one the user wants, the image to be recognized is stored. At the same time, the video segment associated with the stored image can also be stored, so that an image of better quality can subsequently be determined from the associated segment (for example, one in which the target object is clear, captured completely, and centered in the image), while non-associated video segments are deleted, reducing unnecessary memory usage. To further reduce the memory occupied, all videos can be deleted and only the images to be stored retained.
Specifically, if the classification result indicates that the type of the image to be recognized is a type the user wants to store, neighboring frames in the video containing the image can be further recognized to determine an image of better quality, for example by checking whether the target object is captured completely and whether it is centered in the image.
At present, drivers take photos while driving by mounting a mobile phone on a fixed bracket in the car, which prevents them from concentrating on driving and compromises driving safety. By implementing the present disclosure, images can be captured automatically while the owner is driving, and images of the types the owner wants are stored automatically, so that satisfactory photos can be obtained while driving safely.
In the solution provided by the embodiments of the present disclosure, the type of a captured image is determined on the basis of a trained target image classification model, and whether the image is a wanted one is determined from its type; the image is stored only when it is of a type the user wants, which reduces the memory space occupied by captured images. In addition, the target image classification model is trained on the basis of the image types the user has selected to store and the images corresponding to each image type, which strengthens the association between the trained model and the user, so that the classification results determined by the model match the user's expected results more closely.
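The classify-then-store flow described above can be sketched in a few lines of Python. This is a minimal sketch: `classify` is a hypothetical stand-in for the trained target image classification model, which in practice would compute per-class probabilities from pixel data with a neural network.

```python
# Minimal sketch of the capture -> classify -> store loop described above.

def classify(image):
    # Hypothetical stand-in: returns a probability per class label.
    # A real model would compute these from the image content.
    return {"landscape": 0.7, "person": 0.2, "other": 0.1}

def store_if_wanted(image, wanted_types, storage):
    """Store the image only when its most probable class is a wanted type."""
    probs = classify(image)
    image_type = max(probs, key=probs.get)   # highest-probability class (S102)
    if image_type in wanted_types:           # user-wanted type check (S103)
        storage.setdefault(image_type, []).append(image)
        return True
    return False

storage = {}
stored = store_if_wanted("frame_001", {"landscape", "person"}, storage)
# the frame is filed under its predicted type only if that type is wanted
```

Filing stored images into a per-type mapping, as here, also prepares the classified storage described later in this embodiment.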
An embodiment of the present disclosure provides a possible implementation, wherein training the pretrained target image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type includes:
determining a pretrained image classification model on the basis of the image types the user wants to store; and
fine-tuning the pretrained image classification model on the basis of the image types the user wants to store and the images corresponding to each image type, to obtain the target image classification model.
Pretraining in the present disclosure means building a network model to complete a specific image classification task. First, the parameters are initialized randomly, and then the network is trained and continuously adjusted until its loss becomes smaller and smaller. During training, the initialized parameters keep changing, and when the result meets the predefined requirements, the parameters of the trained model can be saved, so that the trained model can achieve good results the next time it performs a similar task. This process is pre-training.
Model fine-tuning means training with another model's parameters, a modified network, and one's own data, so that the parameters adapt to one's own data; such a process is commonly called fine-tuning.
An example of model fine-tuning: CNNs have made great progress in the field of image recognition, but applying a CNN to a user's own dataset usually runs into a problem: the user's own dataset is typically not large, with perhaps only a dozen or a few dozen images per class. In this case, training a network directly on this data is not feasible, because a key factor in the success of deep learning is a training set composed of a large amount of labeled data. With only a small amount of data at hand, even a very good network architecture cannot reach high performance. The idea of fine-tuning solves this problem well: a model trained on ImageNet (such as CaffeNet, VGGNet, or ResNet) is fine-tuned and then applied to the user's own dataset. In short, pretraining refers to a model trained in advance, or to the process of training such a model; fine-tuning refers to the process of applying a pretrained model to one's own dataset and adapting its parameters to that dataset.
For the embodiments of the present disclosure, pretraining followed by fine-tuning not only improves training efficiency; because training is based on the categories selected by the user and the images corresponding to each category, the classification results of the trained image classification model also better match the results the user expects. For multiple users, this personalizes the trained image classification model and can meet the needs of different users.
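The pretrain-then-fine-tune idea above can be illustrated on a deliberately tiny model. This is a toy sketch, not the disclosed method: a one-dimensional logistic regression is "pretrained" on a larger generic dataset and then fine-tuned for a few steps on a small user-specific dataset, starting from the pretrained weights instead of from scratch. All data values are made up for illustration.

```python
import math

# Toy pretrain -> fine-tune: fit on generic data, then adapt on user data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w=0.0, b=0.0, lr=0.5, epochs=200):
    """SGD on logistic loss; gradient of the loss w.r.t. w is (p - y) * x."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

generic = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]   # "large" generic set
user    = [(-1.5, 0), (1.5, 1)]                        # few user samples

w0, b0 = train(generic)                     # pretraining from zero init
w1, b1 = train(user, w0, b0, epochs=20)     # fine-tuning: few steps, warm start

pred = sigmoid(w1 * 1.5 + b1)               # fine-tuned model on a user sample
```

The fine-tuning call reuses the pretrained parameters `w0, b0` and needs far fewer epochs than training from zero, which is exactly the efficiency argument made in the passage above.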
An embodiment of the present disclosure provides a possible implementation, wherein determining the image to be recognized captured by the image acquisition apparatus includes:
Step S1011 (not shown): obtaining a video to be recognized captured by the image acquisition apparatus; and
Step S1012 (not shown): determining the image to be recognized from the obtained video to be recognized through a clustering algorithm.
Specifically, the video captured by the image acquisition apparatus can be obtained, and relevant video frames can be extracted from the video as images to be recognized. In particular, representative frames can be determined from the video frames through a clustering algorithm as the images to be recognized, which reduces the amount of subsequent recognition processing.
Specifically, the images to be recognized can be determined from the video through a clustering algorithm such as unsupervised clustering or k-means clustering. For k-means clustering, the value of k can be determined from the duration of the video combined with the driving speed of the vehicle: at the same vehicle speed, the longer the video, the larger k; for the same video duration, the faster the vehicle, the larger k, and the slower the vehicle, the smaller k.
The basic idea of clustering is to first group the video into n classes, where the frames within a class are similar and the frames between classes are dissimilar. The second step is to extract one representative from each class as a keyframe; in addition, if a class contains too few frames, it is not representative and can be merged directly with neighboring frames. K-Means is one of the iterative dynamic clustering algorithms, where K denotes the number of classes and Means denotes the mean. As the name implies, K-Means clusters data points by their means: given a preset value of K and an initial centroid for each class, the algorithm partitions similar data points and iteratively optimizes the partition by updating the means, obtaining the optimal clustering result.
Specifically, the video segments associated with the stored image can be saved and non-associated segments deleted, where an associated segment may be the segment corresponding to the video frames that belong to the same cluster (i.e., the frames corresponding to the same value of k).
In this embodiment of the application, if the clustering algorithm is the k-means algorithm, the value of k is determined from the video duration and the current vehicle speed. This ensures that an adequate number of images to be recognized is determined, avoiding missing images the user wants while also avoiding determining too many images to be recognized and increasing the subsequent processing load.
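The keyframe-by-clustering step above can be sketched as follows. This is a simplified illustration: each frame is reduced to a single scalar "feature" and clustered with a plain 1-D k-means; the `choose_k` heuristic is an assumption consistent with the stated rule (k grows with both video duration and vehicle speed), not a formula from the source.

```python
import random

def choose_k(duration_s, speed_kmh, scale=0.5):
    # Assumed heuristic: k grows with duration and speed, at least 1 cluster.
    return max(1, int(duration_s * speed_kmh * scale / 60))

def kmeans_1d(values, k, iters=20):
    """Plain k-means on scalar frame features."""
    centroids = sorted(random.sample(values, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:                      # assign to nearest centroid
            i = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]   # update means
    return centroids, clusters

def keyframes(frame_features, k):
    """One representative per non-empty cluster: frame nearest its centroid."""
    centroids, clusters = kmeans_1d(frame_features, k)
    return [min(c, key=lambda v: abs(v - m))
            for m, c in zip(centroids, clusters) if c]
```

A real system would cluster multidimensional frame descriptors rather than scalars, but the structure (assign, update means, pick one representative per cluster) is the same.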
An embodiment of the present disclosure provides a possible implementation, wherein storing the image to be recognized includes:
storing the image to be recognized in classified form according to its type.
In this embodiment of the application, the images to be recognized are stored in classified form according to their types, making it convenient for users to find relevant images. Compared with the prior art, in which the user has to watch the captured video frame by frame to find the desired image, this improves the efficiency of image retrieval.
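One simple way to realize the classified storage just described is a per-type directory layout, so a user browses by category instead of scrubbing through video. The layout and file names below are illustrative assumptions, not specified by the source.

```python
import tempfile
from pathlib import Path

def store_classified(image_bytes, image_name, image_type, root):
    """File an image under a directory named after its predicted type."""
    target_dir = Path(root) / image_type
    target_dir.mkdir(parents=True, exist_ok=True)  # create type folder lazily
    path = target_dir / image_name
    path.write_bytes(image_bytes)
    return path

# Illustrative use with a temporary directory standing in for device storage.
root = tempfile.mkdtemp()
p = store_classified(b"\x89PNG...", "sunset_001.png", "landscape", root)
```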
Embodiment 2
According to the second aspect of the present disclosure, an image classification model training method is provided, wherein the server may be centrally deployed or deployed in a distributed manner. As shown in Fig. 2, the method includes:
Step S201: receiving uploaded image types that a user wants to store and the images corresponding to each image type.
Specifically, the user may, through the application display interface of the in-vehicle terminal, select the image types to be stored from the predefined image types displayed on the interface, and upload a certain number of images for each image type to be stored to the server.
Step S202: training a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type.
Specifically, supervised learning can be performed on the uploaded image types the user wants to store and the images corresponding to each image type, to obtain the target image classification model through training.
Step S203: sending the trained target image classification model.
Specifically, the trained target image classification model can be sent to the user-side terminal device, which uses the target image classification model to determine the type of an image to be recognized, and stores the image to be recognized if its type is a type the user wants to store.
In this embodiment of the application, the target image classification model is trained on the basis of the uploaded image types the user wants to store and the images corresponding to each image type, which personalizes the trained target classification model.
This embodiment of the application provides a possible implementation, wherein training the target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type includes:
Step S2021 (not shown): determining a pretrained image classification model on the basis of the received uploaded image types that the user wants to store.
Specifically, several image classification models may be pretrained in advance through the pretraining process, for example an image classification model X that can classify into A, B, C, and D, and an image classification model Y that can classify into A, B, D, and E, where A, B, C, and E are image types users may want to store, and D covers other types that users do not want to store.
For example, if the image types the user uploaded and wants to store are A, B, and C, model X is taken as the target pretrained image classification model.
Step S2022 (not shown): fine-tuning the pretrained image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type, to obtain the target image classification model.
Continuing the above example, the user can further select some unwanted images as type D for training, which further strengthens the association between the trained image classification model and the user and improves the model's personalization. In addition, using unwanted images as type D for training prevents image types the user does not want from being recognized as wanted types (that is, from being recognized as image types A, B, or C), and avoids storing large numbers of unwanted images that occupy memory space.
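The model-selection step in this example can be sketched as a lookup over a registry of pretrained models keyed by the class sets they support. The registry names and contents are illustrative, mirroring the X/Y example above.

```python
# Hypothetical registry mapping pretrained model names to the class sets
# they can distinguish, mirroring models X and Y in the example above.
PRETRAINED_REGISTRY = {
    "model_X": {"A", "B", "C", "D"},
    "model_Y": {"A", "B", "D", "E"},
}

def select_pretrained(wanted_types):
    """Return the first pretrained model whose classes cover the wanted types."""
    for name, classes in PRETRAINED_REGISTRY.items():
        if set(wanted_types) <= classes:
            return name
    return None  # no pretrained model covers the request; train from scratch

chosen = select_pretrained({"A", "B", "C"})  # covered only by model_X
```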
It should be noted that the pretraining and fine-tuning in Embodiment 2 are the same as in Embodiment 1 and are not repeated here.
Compared with the prior art, in which the user has to stop the car or drive while distracted to take photos along the way, posing safety problems, or has to look through the video captured by the image acquisition device to find the desired picture, the solutions provided by Embodiments 1 and 2 improve efficiency. The present disclosure determines an image to be recognized captured by an image acquisition apparatus; determines the type of the image to be recognized on the basis of a pretrained target image classification model; and, if the type of the image to be recognized is a type the user wants to store, stores the image to be recognized. That is, the type of the captured image is determined on the basis of the trained target image classification model, and whether it is a wanted image is determined from the determined type; the image is stored only when it is of a type the user wants, which reduces the memory space occupied by captured images. In addition, the target image classification model is trained on the basis of the image types the user has selected to store and the images corresponding to each image type, which strengthens the association between the trained target image classification model and the user, so that the classification results determined on the basis of this model match the user's expected results more closely.
It should be understood that although the steps in the flowcharts of Figs. 1-2 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they can be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Embodiment 3
An embodiment of the present disclosure provides an image capturing apparatus. As shown in Fig. 3, the apparatus 30 includes:
a first determination module 301 configured to determine an image to be recognized captured by an image acquisition apparatus;
a second determination module 302 configured to determine the type of the image to be recognized on the basis of a pretrained target image classification model; and
a storage module 303 configured to store the image to be recognized if its type is a type that the user wants to store.
This embodiment of the application provides a possible implementation, wherein the pretrained target image classification model is trained on the basis of the image types the user has selected to store and the images corresponding to each image type.
This embodiment of the application provides a possible implementation, wherein training the pretrained target image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type includes:
determining a pretrained image classification model on the basis of the image types the user has selected to store; and
fine-tuning the pretrained image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type, to obtain the target image classification model.
This embodiment of the application provides a possible implementation, wherein the first determination module 301 includes:
an obtaining unit 3011 (not shown) configured to obtain a video to be recognized captured by the image acquisition apparatus; and
a first determination unit 3012 (not shown) configured to determine the image to be recognized from the obtained video to be recognized through a clustering algorithm.
This embodiment of the application provides a possible implementation, wherein the clustering algorithm is the k-means clustering algorithm, and the value of k is determined from the duration of the video to be recognized and the user's driving speed when the video to be recognized was captured.
This embodiment of the application provides a possible implementation, wherein the storage module 303 is specifically used to store the image to be recognized in classified form according to its type.
The beneficial effects achieved by this embodiment of the application are the same as those of the method embodiments above and are not repeated here.
Embodiment 4
An embodiment of the present disclosure provides an image classification model training apparatus. The apparatus 40 includes:
a receiving module 401 configured to receive uploaded image types that a user wants to store and the images corresponding to each image type;
a training module 402 configured to train a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type; and
a sending module 403 configured to send the trained target image classification model.
This embodiment of the application provides a possible implementation, wherein the training module 402 includes:
a second determination unit 4021 (not shown) configured to determine a pretrained image classification model on the basis of the received uploaded image types that the user wants to store; and
a fine-tuning unit 4022 (not shown) configured to fine-tune the pretrained image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type, to obtain the target image classification model.
The beneficial effects achieved by this embodiment of the application are the same as those of the method embodiments above and are not repeated here.
In the technical solutions of the present disclosure, the acquisition, storage, and application of the user's personal information involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good morals.
It should be understood that each module in the apparatuses of Figs. 3-4 may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in a computer in hardware form, or stored in a memory in the computer in software form, so that the processor can invoke and execute the operations corresponding to each module.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
The electronic device includes: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method provided by the embodiments of the present disclosure.
Compared with the prior art, in which the user has to stop the car or drive while distracted to take photos along the way, posing safety problems, or has to look through the video captured by the image acquisition device to find the desired picture, the electronic device improves efficiency. The present disclosure determines an image to be recognized captured by an image acquisition apparatus; determines the type of the image to be recognized on the basis of a pretrained target image classification model; and, if the type of the image to be recognized is a type the user wants to store, stores the image to be recognized. That is, the type of the captured image is determined on the basis of the trained target image classification model, and whether it is a wanted image is determined from the determined type; the image is stored only when it is of a type the user wants, which reduces the memory space occupied by captured images. In addition, the target image classification model is trained on the basis of the image types the user has selected to store and the images corresponding to each image type, which strengthens the association between the trained target image classification model and the user, so that the classification results determined on the basis of this model match the user's expected results more closely.
The readable storage medium is a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method provided by the embodiments of the present disclosure.
Compared with the prior art, in which the user has to stop the car or drive while distracted to take photos along the way, posing safety problems, or has to look through the video captured by the image acquisition device to find the desired picture, the readable storage medium improves efficiency for the same reasons given above for the electronic device: images are stored only when their type, determined by the trained target image classification model, is a type the user wants, which reduces the memory space occupied by captured images; and the model, trained on the user-selected image types and their corresponding images, produces classification results that match the user's expectations more closely.
The computer program product includes a computer program that, when executed by a processor, implements the method shown in the first aspect of the present disclosure.
Compared with the prior art, in which the user has to stop the car or drive while distracted to take photos along the way, posing safety problems, or has to look through the video captured by the image acquisition device to find the desired picture, the computer program product likewise improves efficiency: images are stored only when their type, determined by the trained target image classification model, is a type the user wants, which reduces the memory space occupied by captured images; and the model, trained on the user-selected image types and their corresponding images, produces classification results that match the user's expectations more closely.
Fig. 5 shows a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in Fig. 5, the device 500 includes a computing unit 501 that can perform various appropriate actions and processing according to a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Multiple components in the device 500 are connected to the I/O interface 505, including: an input unit 506, such as a keyboard or a mouse; an output unit 507, such as various types of displays and speakers; a storage unit 508, such as a magnetic disk or an optical disc; and a communication unit 509, such as a network card, a modem, or a wireless communication transceiver. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 501 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, and so on. The computing unit 501 performs the methods and processing described above, such as the image capturing method or the image classification model training method. For example, in some embodiments, the image capturing method or the image classification model training method may be implemented as a computer software program tangibly contained in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the image capturing method or the image classification model training method described above can be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the image capturing method or the image classification model training method in any other suitable manner (for example, by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with the user, the systems and techniques described herein can be implemented on a computer having: a display apparatus for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of apparatuses can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and the input from the user can be received in any form (including acoustic input, speech input, or tactile input).
The systems and techniques described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer having a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected through digital data communication in any form or medium (for example, a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system can include a client and a server. The client and the server are generally remote from each other and usually interact through a communication network. The relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other. The server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps can be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure can be executed in parallel, sequentially, or in a different order, as long as the result expected by the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The specific implementations above do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (19)

  1. An image capturing method, comprising:
    determining an image to be recognized captured by an image acquisition apparatus (S101);
    determining the type of the image to be recognized on the basis of a pretrained target image classification model (S102); and
    if the type of the image to be recognized is a type that a user wants to store, storing the image to be recognized (S103).
  2. The method according to claim 1, wherein the pretrained target image classification model is trained on the basis of the image types the user has selected to store and the images corresponding to each image type.
  3. The method according to claim 1, wherein training the pretrained target image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type comprises:
    determining a pretrained image classification model on the basis of the image types the user has selected to store; and
    fine-tuning the pretrained image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type, to obtain the target image classification model.
  4. The method according to claim 1, wherein determining the image to be recognized captured by the image acquisition apparatus comprises:
    obtaining a video to be recognized captured by the image acquisition apparatus; and
    determining the image to be recognized from the obtained video to be recognized through a clustering algorithm.
  5. The method according to claim 4, wherein the clustering algorithm is the k-means clustering algorithm, and the value of k is determined from the duration of the video to be recognized and the user's driving speed when the video to be recognized was captured.
  6. The method according to claim 1, wherein storing the image to be recognized comprises:
    storing the image to be recognized in classified form according to its type.
  7. An image classification model training method, comprising:
    receiving uploaded image types that a user wants to store and the images corresponding to each image type (S201);
    training a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type (S202); and
    sending the trained target image classification model (S203).
  8. The method according to claim 7, wherein training the target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type comprises:
    determining a pretrained image classification model on the basis of the received uploaded image types that the user wants to store; and
    fine-tuning the pretrained image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type, to obtain the target image classification model.
  9. An image capturing apparatus (30), comprising:
    a first determination module (301) configured to determine an image to be recognized captured by an image acquisition apparatus;
    a second determination module (302) configured to determine the type of the image to be recognized on the basis of a pretrained target image classification model; and
    a storage module (303) configured to store the image to be recognized if the type of the image to be recognized is a type that a user wants to store.
  10. The image capturing apparatus according to claim 9, wherein the pretrained target image classification model is trained on the basis of the image types the user has selected to store and the images corresponding to each image type.
  11. The apparatus according to claim 9, wherein training the pretrained target image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type comprises: determining a pretrained image classification model on the basis of the user-selected image types to be stored; and fine-tuning the pretrained image classification model on the basis of the user-selected image types to be stored and the images corresponding to each image type, to obtain the target image classification model.
  12. The apparatus according to claim 9, wherein the first determination module comprises:
    an obtaining unit configured to obtain a video to be recognized captured by the image acquisition apparatus; and
    a first determination unit configured to determine the image to be recognized from the obtained video to be recognized through a clustering algorithm.
  13. The apparatus according to claim 9, wherein the clustering algorithm is the k-means clustering algorithm, and the value of k is determined from the duration of the video to be recognized and the user's driving speed when the video to be recognized was captured.
  14. The apparatus according to claim 9, wherein the storage module is specifically used to store the image to be recognized in classified form according to its type.
  15. An image classification model training apparatus (40), comprising:
    a receiving module (401) configured to receive uploaded image types that a user wants to store and the images corresponding to each image type;
    a training module (402) configured to train a target image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type; and
    a sending module (403) configured to send the trained target image classification model.
  16. The apparatus according to claim 15, wherein the training module comprises:
    a second determination unit configured to determine a pretrained image classification model on the basis of the received uploaded image types that the user wants to store; and
    a fine-tuning unit configured to fine-tune the pretrained image classification model on the basis of the uploaded image types that the user wants to store and the images corresponding to each image type, to obtain the target image classification model.
  17. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-8.
  18. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method according to any one of claims 1-8.
  19. A computer program product, comprising a computer program that, when executed by a processor, implements the method according to any one of claims 1-8.
PCT/CN2021/125788 2021-06-30 2021-10-22 Image capturing method, image classification model training method, apparatus, and electronic device WO2023273035A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110742070.4A CN113469250A (zh) 2021-06-30 2021-06-30 Image capturing method, image classification model training method, apparatus, and electronic device
CN202110742070.4 2021-06-30

Publications (1)

Publication Number Publication Date
WO2023273035A1 true WO2023273035A1 (zh) 2023-01-05

Family

ID=77877031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125788 2021-06-30 2021-10-22 Image capturing method, image classification model training method, apparatus, and electronic device WO2023273035A1 (zh)

Country Status (2)

Country Link
CN (1) CN113469250A (zh)
WO (1) WO2023273035A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469250A (zh) * 2021-06-30 2021-10-01 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Image capturing method, image classification model training method, apparatus, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110109878A (zh) * 2018-01-10 2019-08-09 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Album management method and apparatus, storage medium, and electronic device
WO2019157690A1 (zh) * 2018-02-14 2019-08-22 SZ DJI Technology Co., Ltd. Automatic capturing method and apparatus, unmanned aerial vehicle, and storage medium
CN111077159A (zh) * 2019-12-31 2020-04-28 Beijing Jingtianwei Technology Development Co., Ltd. Track circuit box fault detection method, system, device, and readable medium
CN111147764A (zh) * 2019-12-31 2020-05-12 Beijing Jingtianwei Technology Development Co., Ltd. Leaky coaxial cable image acquisition method and system based on real-time image recognition
US20200258215A1 (en) * 2019-02-11 2020-08-13 International Business Machines Corporation Methods and systems for determining a diagnostically unacceptable medical image
CN113469250A (zh) * 2021-06-30 2021-10-01 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Image capturing method, image classification model training method, apparatus, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217205B (zh) * 2013-05-29 2018-05-18 Huawei Technologies Co., Ltd. Method and system for identifying a user activity type


Also Published As

Publication number Publication date
CN113469250A (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
US10885100B2 (en) Thumbnail-based image sharing method and terminal
CN113255694B Method and apparatus for training an image feature extraction model and extracting image features
CN107578017B Method and apparatus for generating an image
CN112465008B Voice and visual relevance enhancement method based on self-supervised curriculum learning
WO2020199704A1 Text recognition
WO2023273769A1 Method for training a video tag recommendation model and method for determining a video tag
US20230069197A1 (en) Method, apparatus, device and storage medium for training video recognition model
US11164004B2 (en) Keyframe scheduling method and apparatus, electronic device, program and medium
WO2020047854A1 (en) Detecting objects in video frames using similarity detectors
WO2022166625A1 Method for pushing information in a vehicle driving scenario and related apparatus
WO2023016007A1 Method and apparatus for training a face recognition model, and computer program product
US10445586B2 (en) Deep learning on image frames to generate a summary
CN113379627B Method for training an image enhancement model and method for enhancing an image
WO2022247343A1 Recognition model training method and apparatus, recognition method, device, and storage medium
WO2022227765A1 Method for generating an image restoration model, device, medium, and program product
WO2022142212A1 Handwriting recognition method and apparatus, electronic device, and medium
WO2023178930A1 Image recognition method, training method, apparatus, system, and storage medium
WO2023273035A1 Image capturing method, image classification model training method, apparatus, and electronic device
CN113810765B Video processing method, apparatus, device, and medium
CN112650885A Video classification method, apparatus, device, and medium
US10019781B2 (en) Image processing of objects and a background
CN113780578B Model training method and apparatus, electronic device, and readable storage medium
KR102246110B1 (ko) 영상 처리 장치 및 그 영상 처리 방법
WO2023024424A1 Segmentation network training method, usage method, apparatus, device, and storage medium
US20220335316A1 (en) Data annotation method and apparatus, electronic device and readable storage medium

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE