CN107766815B - Visual auxiliary service operation method - Google Patents

Visual auxiliary service operation method

Info

Publication number
CN107766815B
CN107766815B (application CN201710968408.1A)
Authority
CN
China
Prior art keywords
user
layer
convolution
management platform
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710968408.1A
Other languages
Chinese (zh)
Other versions
CN107766815A (en
Inventor
余春艳
徐小丹
陈璐
陈格
王秀
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN201710968408.1A
Publication of CN107766815A
Application granted
Publication of CN107766815B
Legal status: Active

Abstract

The invention relates to a vision assistance service operation system and operation method. The system comprises a user terminal and an operation service management platform. The user terminal is an application program installed on the user's own internet terminal or communication device; it sends service requests and the geographic position to the operation service management platform, and receives and processes the feedback. Through a user database, a database of generative adversarial model (GAN) parameters and the like, the operation service management platform selects GAN parameters and charges the assessed fee according to the feedback information from the user terminal. The vision assistance service operation system can be used anywhere, and has the advantages of low operating cost and easy popularization.

Description

Visual auxiliary service operation method
Technical Field
The invention relates to vision assistance service systems, in particular to a vision assistance service operation system capable of all-weather active vision enhancement, and specifically to a vision assistance service operation method.
Background
Currently, the main function of a dashboard camera (automobile data recorder) is to capture high-resolution video with a high-definition camera and store it on a memory card, generally so that the footage can serve as evidence when assigning responsibility for a traffic accident. As hardware technology matures and performance improves, however, the dashboard camera need not be limited to this role. For example, night-vision enhancement can be achieved by enlarging the camera's photosensitive area, so that the display presents a video image clearer than what the human eye observes.
In the heavily haze-affected regions of northern China, visibility is extremely low, posing serious safety hazards for travel. On the road in particular, poor visibility easily causes traffic accidents large and small, invites staged-accident ("porcelain bumping") scams, and degrades road monitoring facilities. A survey of the dashboard cameras currently on the market shows that existing devices lack the ability to restore video captured in severe weather (such as fog, haze and low illumination), and comparable ADAS systems on the market likewise offer no dehazing function. Moreover, haze has no effective natural model, so existing vision assistance technology still needs improvement and development, especially for vision assistance in hazy environments.
Disclosure of Invention
Aiming at the above problems of existing dashboard cameras, the invention provides a vision assistance service operation method; the corresponding operation system can be used anywhere and has the advantages of low operating cost and easy popularization.
To this end, the technical scheme of the invention is as follows: a vision assistance service operation method provides a vision assistance service operation system comprising a user terminal and an operation management platform;
the user terminal is an application program installed on the user's own internet terminal or communication device; it sends service requests and the geographic position to the operation management platform, and receives and processes the feedback; the user terminal comprises a user input module, a user output module, a user sending module, a user receiving module, a positioning module and a processing module; the processing module dehazes and defogs the video according to the received generative adversarial model (GAN) parameters;
the operation management platform receives and responds to users' model update requests and forms a service management system together with the user terminals; it comprises a platform receiving unit, a platform feedback unit and an updating unit; the updating unit maintains the user database and the GAN parameter database;
the method comprises the following steps:
(a) a user sends a GAN parameter update request carrying address information to the operation management platform;
(b) the operation management platform compares the request against the GAN parameter database, selects the GAN parameters matching the request, and feeds them back to the user;
(c) when the request is confirmed and submitted, the platform checks the user database against the request information to decide whether the user may use the function; if not, the platform sends authentication information to the user to confirm identity and collect prepayment, and synchronously updates the relevant user database;
(d) the operation management platform updates the GAN parameter database, specifically as follows:
(d1) collect haze images of different concentrations together with haze-free images, and cluster their luminance histograms to obtain 16 haze-concentration data sets;
(d2) train a GAN consisting of two generators and a discriminator on the collected images to obtain general generator and discriminator parameters;
(d3) fine-tune the GAN parameters obtained in (d2), using the same two-generator/one-discriminator GAN, on each of the 16 haze-concentration data sets to obtain GAN parameters for each concentration;
(d4) for each geolocated haze-free image, generate a haze image of the corresponding concentration with the GAN parameters obtained in (d3); train a GAN consisting of one generator and one discriminator to obtain GAN parameters for each position at the corresponding concentration, and store them in the GAN parameter database;
(d5) each time a further 10 million paired haze and haze-free images have been accumulated, repeat steps (d1) to (d4) and update the GAN parameter database.
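As a concrete illustration of step (d1), the luminance-histogram clustering can be sketched as follows. This is a minimal sketch under assumptions not fixed by the text: the histogram bin count, the distance metric and the use of plain k-means are all choices made here for illustration, and the function names (`luminance_histogram`, `cluster_by_haze`) do not come from the patent.

```python
import numpy as np

def luminance_histogram(img, bins=32):
    """Normalized luminance histogram of an RGB image (H, W, 3) in [0, 255]."""
    lum = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    hist, _ = np.histogram(lum, bins=bins, range=(0, 255))
    return hist / hist.sum()

def cluster_by_haze(histograms, k=16, iters=50, seed=0):
    """Plain k-means over histogram vectors; returns one cluster id per image."""
    rng = np.random.default_rng(seed)
    X = np.asarray(histograms, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance of every histogram to every center, then nearest assignment
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

With `k=16` this yields the 16 haze-concentration data sets of step (d1): each image's cluster id selects which data set it joins.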
In an embodiment of the present invention, the user input module, user output module, user sending module, user receiving module, positioning module and processing module may be separate, interconnected or integrated, and the user input module and user output module may interface with an external system or device;
the user input module is used by the user to input information;
the user output module outputs information to the user;
the user sending module is used by the user to upload/submit information to the operation service management platform;
the user receiving module is used by the user to download/receive the information sent to the user by the service management system;
the positioning module locates the position of the corresponding user terminal and sends the position information to the operation management platform;
the processing module processes the information that the user downloads/receives from the operation management platform and the received visual information.
In an embodiment of the present invention, the vision assistance service operation system operates as follows:
(a) a user sends a GAN update request to the operation management platform;
(b) the operation management platform selects the GAN parameters matching each user terminal's request according to its positioning information and request information, and feeds the relevant GAN parameters back to the user;
(c) the user receives the corresponding GAN parameters and uses them to defog and dehaze the image or video.
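Steps (a) to (c) amount to a location-keyed parameter lookup. The sketch below shows one way the matching in step (b) might work; the database layout, the cost function and every name here (`PARAMETER_DB`, `match_parameters`, the coordinates and identifiers) are hypothetical illustrations, not taken from the patent.

```python
import math

# Hypothetical in-memory stand-in for the GAN parameter database: each entry
# keys a (lat, lon) position and a haze-concentration bin to a parameter blob id.
PARAMETER_DB = [
    {"lat": 26.08, "lon": 119.30, "haze_bin": 12, "params_id": "fuzhou_h12"},
    {"lat": 39.90, "lon": 116.40, "haze_bin": 15, "params_id": "beijing_h15"},
]

def match_parameters(lat, lon, haze_bin):
    """Return the params_id whose position and haze bin best match the request."""
    def cost(e):
        # crude equirectangular distance plus a penalty per bin of haze mismatch
        dist = math.hypot(e["lat"] - lat,
                          (e["lon"] - lon) * math.cos(math.radians(lat)))
        return dist + 0.5 * abs(e["haze_bin"] - haze_bin)
    return min(PARAMETER_DB, key=cost)["params_id"]
```

A terminal near Fuzhou requesting bin 12 would thus receive the "fuzhou_h12" parameters, which the processing module then loads to dehaze the video.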
In an embodiment of the present invention, the platform receiving unit, platform feedback unit and updating unit are separate or integrated.
In an embodiment of the present invention, the operation management platform further includes a fee deduction unit.
In an embodiment of the present invention, the fee deduction unit performs the fee deduction according to the user's payment mode and whether the vision enhancement service was used.
In an embodiment of the present invention, the platform receiving unit receives the position information fed back by the user terminal and the GAN parameter update requirement.
In an embodiment of the present invention, the platform feedback unit screens out the GAN parameter information matching the user's position information and parameter update request, and feeds it back to the user terminal.
In an embodiment of the present invention, (d2) is implemented as follows.
The generator is constructed as follows:
the first layer is a data layer; the input image is scaled to 256 × 256;
the second layer is a convolution layer with 64 kernels of size (7, 7, 3), followed by a BatchNorm layer; the activation function is a sigmoid function;
the third layer is a convolution layer with 128 kernels of size (3, 3, 64), followed by a BatchNorm layer; the activation function is a sigmoid function;
the fourth layer is a convolution layer with 256 kernels of size (3, 3, 128), followed by a BatchNorm layer; the activation function is a sigmoid function;
the fifth through thirteenth layers are two-stage convolutions: the first stage uses 256 kernels of size (3, 3, 256), followed by a BatchNorm layer, with a sigmoid activation; the second stage uses 256 kernels of size (3, 3, 256), followed by a BatchNorm layer;
the fourteenth layer is a convolution layer with 128 kernels of size (3, 3, 256), followed by a BatchNorm layer; the activation function is a sigmoid function;
the fifteenth layer is a convolution layer with 64 kernels of size (3, 3, 128), followed by a BatchNorm layer; the activation function is a sigmoid function;
the sixteenth layer is a convolution layer with 3 kernels of size (7, 7, 64); the activation function is a hyperbolic tangent function.
The discriminator is constructed as follows:
the first layer is a data layer; the input image size is 256 × 256;
the second layer is a convolution layer with 64 kernels of size (4, 4, 3); the activation function is a LeakyReLU function;
the third layer is a convolution layer with 128 kernels of size (4, 4, 64), followed by a BatchNorm layer; the activation function is a LeakyReLU function;
the fourth layer is a convolution layer with 256 kernels of size (4, 4, 128), followed by a BatchNorm layer; the activation function is a LeakyReLU function;
the fifth layer is a convolution layer with 256 kernels of size (4, 4, 256), followed by a BatchNorm layer; the activation function is a LeakyReLU function;
the sixth layer is a convolution layer with 1 kernel of size (4, 4, 256).
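The generator's layer list above can be checked mechanically for channel consistency: each convolution's input depth must equal the previous layer's kernel count. The sketch below transcribes the kernel specifications from the text; strides and padding are not stated, so only the channel bookkeeping is verified, and the reading of layers five to thirteen as nine two-convolution (residual-style) stages is an interpretation, not a statement from the patent.

```python
def generator_convs():
    """Conv specs as ((kh, kw, cin), cout), in the order listed in the text."""
    convs = [((7, 7, 3), 64), ((3, 3, 64), 128), ((3, 3, 128), 256)]
    for _ in range(9):  # layers 5-13: two-stage convolutions, read as 9 pairs
        convs += [((3, 3, 256), 256), ((3, 3, 256), 256)]
    convs += [((3, 3, 256), 128), ((3, 3, 128), 64), ((7, 7, 64), 3)]
    return convs

def channels_consistent(convs):
    """Each conv's input depth must match the previous conv's kernel count."""
    depth = 3                       # RGB input, scaled to 256x256 per the text
    for (kh, kw, cin), cout in convs:
        if cin != depth:
            return False
        depth = cout
    return depth == 3               # a 3-channel dehazed image comes out
```

Walking the list confirms the depths chain correctly from the 3-channel input back to a 3-channel output.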
Compared with the prior art, the invention has the following beneficial effects: the vision assistance service operation system can be used anywhere, and has the advantages of low operating cost and easy popularization.
Drawings
Fig. 1 is a schematic structural diagram of the vision assistance service operation system of the present invention.
Fig. 2 is a block diagram of the generative adversarial model comprising two generators and a discriminator applied in an embodiment of the present invention.
Fig. 3 is a block diagram of the generator applied in an embodiment of the present invention.
Fig. 4 is a block diagram of the discriminator applied in an embodiment of the present invention.
Fig. 5 is a diagram of the generative adversarial model comprising one generator and one discriminator applied in an embodiment of the present invention.
Fig. 6 shows the visual effect of the haze removal results in an embodiment of the invention.
Detailed Description
The technical scheme of the invention is explained below with reference to the accompanying drawings.
The invention discloses a vision assistance service operation system which, based on the mobile internet and/or the internet of things, meets users' personalized vision enhancement needs by producing and optimizing in real time a series of generative adversarial model (GAN) parameters carrying spatial geographic-position attributes.
As shown in fig. 1, the vision assistance service operation system includes a user terminal and an operation management platform, wherein:
the user terminal is an application program installed on the user's own internet terminal or communication device; it sends service requests and the geographic position to the operation service management platform, and receives and processes the feedback; it includes a user input module, a user output module, a user sending module, a user receiving module, a positioning module and a processing module;
the processing module dehazes and defogs the video according to the received GAN parameters;
the operation management platform receives and responds to users' model update requests and forms a service management system together with the user terminals; it comprises a platform receiving unit, a platform feedback unit and an updating unit;
the updating unit maintains the user database, the GAN parameter database and the like.
Through the application program installed on the user's own internet terminal or communication device, the user terminal sends service requests and the geographic position to the operation service management platform, and receives and processes the feedback; it includes a user input module, a user output module, a user sending module, a user receiving module, a positioning module and a processing module.
The modules may be separate, interconnected or integrated, and the input/output modules can interface with other systems or devices, such as user peripherals like a printer, a USB flash drive or a portable hard disk.
The user input module is used by the user to input information; it may be a mobile phone, a PDA, a tablet computer with network communication capability, or a dedicated device.
The user output module outputs information to the user; it may likewise be a mobile phone, a PDA, a tablet computer with network communication capability, or a dedicated device.
The user sending module is used by the user to upload/submit a personalized vision service request carrying a spatial address and to send confirmation information to the operation service management platform.
The user receiving module is used by the user to download/receive the information sent by the service management system.
The positioning module locates the position of the corresponding user terminal and sends it to the operation service management platform; the position information is GPS positioning information.
The processing module processes the information that the user downloads/receives from the operation management platform and the received visual information.
The operation management platform comprises a platform receiving unit, a platform feedback unit and an updating unit, which are separate or integrated; it further comprises a fee deduction unit.
The platform receiving unit receives the user terminal's GAN parameter update request, the position information fed back by the user terminal, and the GAN parameter update requirement.
The platform feedback unit screens out the GAN parameter information matching the user's position information and parameter update request, and feeds it back to the user terminal.
The updating unit updates the user database and the GAN parameter database.
The invention also provides a vision assistance service operation method, which applies the above vision assistance service operation system and comprises the following steps:
(a) a user sends a GAN parameter update request carrying address information to the operation management platform;
(b) the operation management platform compares the request against the GAN parameter database, selects the GAN parameter information closest to the user's selection, and feeds it back to the user;
(c) when the request is confirmed and submitted, the platform checks the user database against the request information to decide whether the user may use the function; if not, the platform sends authentication information to the user to confirm identity and collect prepayment, and synchronously updates the relevant user database;
(d) the operation management platform updates the GAN parameter database, specifically as follows:
(d1) collect haze images of different concentrations together with haze-free images, and cluster their luminance histograms to obtain 16 haze-concentration data sets;
(d2) construct a GAN consisting of two generators and a discriminator, as shown in fig. 2 and structured as follows, and train it on the collected images to obtain general generator and discriminator parameters.
the construction generator shown in fig. 3 is constructed as follows:
the first layer is a data layer, and the input image is scaled to 256 × 256;
the second layer is a convolution layer, 64 convolution kernels are used, the size is (7, 7, 3), the BatchNorm layer is connected after the convolution kernel, and the activation function is a sigmoid function;
the third layer is a convolution layer, 128 convolution kernels are used, the size is (3, 3 and 64), the BatchNorm layer is connected after the convolution kernel, and the activation function is a sigmoid function;
the fourth layer is a convolution layer, 256 convolution kernels are used, the size is (3, 3, 128), the next layer is a BatchNorm layer, and the activation function is a sigmoid function;
the fifth layer to the thirteenth layer are two-stage convolution, the first-stage convolution uses 256 convolution kernels, the size is (3, 3 and 256), then a BatchNorm layer is connected, and the activation function is a sigmoid function; the second convolution uses 256 convolution kernels of size (3, 3, 256) followed by a BatchNorm layer;
the fourteenth layer is a convolutional layer, 128 convolutional kernels are used, the size is (3, 3, 256), a BatchNorm layer is connected after the convolutional layer, and the activation function is a sigmoid function;
the fifteenth layer is a convolutional layer, using 64 convolutional kernels, with the size of (3, 3, 128), followed by a BatchNorm layer, and the activation function is a sigmoid function;
the sixteenth layer is a convolution layer, 3 convolution kernels are used, the size is (7, 7, 64), and the activation function is a hyperbolic tangent function;
the construction discriminator shown in fig. 4 is constructed as follows:
the first layer is a data layer, and the size of an input image is 256 × 256;
the second layer is a convolutional layer, 64 convolutional kernels are used, the size is (4, 4, 3), and the activation function is a LeakyRelu function;
the third layer is a convolution layer, 128 convolution kernels are used, the size is (4, 4, 64), the BatchNorm layer is connected after the convolution kernel layer, and the activation function is a LeakyRelu function;
the fourth layer is a convolution layer, 256 convolution kernels are used, the size is (4, 4, 128), the next layer is a BatchNorm layer, and the activation function is a LeakyRelu function;
the fifth layer is a convolution layer, 256 convolution kernels are used, the size is (4, 4, 128), a BatchNorm layer is connected after the convolution kernel, and the activation function is a LeakyRelu function;
the sixth layer is a convolutional layer, using 1 convolutional kernel, with size (4, 4, 256)
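Because all five discriminator convolutions use 4 × 4 kernels and the output is a single channel, the network resembles a PatchGAN-style critic. The text does not state strides or padding, so the sketch below simply assumes stride 2 and padding 1 for every convolution (an assumption; common PatchGAN variants switch to stride 1 in the last layers). Under that assumption the 256 × 256 input yields an 8 × 8 patch map of real/fake scores. All names here are illustrative.

```python
def conv_out(n, k=4, s=2, p=1):
    """Output spatial size of a square convolution; stride and padding
    are assumptions, not values given in the text."""
    return (n + 2 * p - k) // s + 1

def discriminator_patch_size(n=256, num_convs=5):
    """Spatial size of the 1-channel map after the five 4x4 convolutions."""
    for _ in range(num_convs):
        n = conv_out(n)
    return n
```

Each cell of the resulting map judges one receptive-field patch of the input, which is why such a discriminator scales to full-frame video.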
(d3) Fine-tune the GAN parameters obtained in (d2), using the same two-generator/one-discriminator GAN, on each of the 16 haze-concentration data sets to obtain GAN parameters for each concentration.
(d4) For each geolocated haze-free image, generate a haze image of the corresponding concentration with the GAN parameters obtained in (d3); construct and train a GAN consisting of one generator and one discriminator, as shown in fig. 5, to obtain GAN parameters for each position at the corresponding concentration, and store them in the GAN parameter database.
(d5) Each time a further 10 million paired haze and haze-free images have been accumulated, repeat steps (d1) to (d4) and update the GAN parameter database.
Fig. 6 shows the visual effect of the haze removal results obtained with the system and method of the present invention.
The above are preferred embodiments of the present invention; any change made according to the technical scheme of the invention that produces an equivalent functional effect without exceeding the scope of that scheme falls within the protection scope of the invention.

Claims (9)

1. A vision assistance service operation method, characterized in that a vision assistance service operation system is provided, comprising a user terminal and an operation management platform;
the user terminal is an application program installed on the user's own internet terminal or communication device; it sends service requests and the geographic position to the operation management platform, and receives and processes the feedback; the user terminal comprises a user input module, a user output module, a user sending module, a user receiving module, a positioning module and a processing module; the processing module dehazes and defogs the video according to the received generative adversarial model (GAN) parameters;
the operation management platform receives and responds to users' model update requests and forms a service management system together with the user terminals; it comprises a platform receiving unit, a platform feedback unit and an updating unit; the updating unit maintains the user database and the GAN parameter database;
the method comprises the following steps:
(a) a user sends a GAN parameter update request carrying address information to the operation management platform;
(b) the operation management platform compares the request against the GAN parameter database, selects the GAN parameters matching the request, and feeds them back to the user;
(c) when the request is confirmed and submitted, the platform checks the user database against the request information to decide whether the user may use the function; if not, the platform sends authentication information to the user to confirm identity and collect prepayment, and synchronously updates the relevant user database;
(d) the operation management platform updates the GAN parameter database, specifically as follows:
(d1) collect haze images of different concentrations together with haze-free images, and cluster their luminance histograms to obtain 16 haze-concentration data sets;
(d2) train a GAN consisting of two generators and a discriminator on the collected images to obtain general generator and discriminator parameters;
(d3) fine-tune the GAN parameters obtained in (d2), using the same two-generator/one-discriminator GAN, on each of the 16 haze-concentration data sets to obtain GAN parameters for each concentration;
(d4) for each geolocated haze-free image, generate a haze image of the corresponding concentration with the GAN parameters obtained in (d3); train a GAN consisting of one generator and one discriminator to obtain GAN parameters for each position at the corresponding concentration, and store them in the GAN parameter database;
(d5) each time a further 10 million paired haze and haze-free images have been accumulated, repeat steps (d1) to (d4) and update the GAN parameter database.
2. The vision assistance service operation method according to claim 1, characterized in that: the user input module, user output module, user sending module, user receiving module, positioning module and processing module may be separate, interconnected or integrated, and the user input module and user output module may interface with an external system or device;
the user input module is used by the user to input information;
the user output module outputs information to the user;
the user sending module is used by the user to upload/submit information to the operation service management platform;
the user receiving module is used by the user to download/receive the information sent to the user by the service management system;
the positioning module locates the position of the corresponding user terminal and sends the position information to the operation management platform;
the processing module processes the information that the user downloads/receives from the operation management platform and the received visual information.
3. A visual assistance service operation method according to claim 1, characterized by: the system operates as follows:
(a) a user sends a request for generating an updating countermeasure model to an operation management platform;
(b) the operation management platform selects a generated confrontation model parameter matched with the request of the user according to the positioning information and the request information of each user terminal, and feeds back the relevant generated confrontation model parameter to the user;
(c) and the user receives the corresponding generated countermeasure model parameters, and the image or the video is subjected to defogging and haze removal through the generated countermeasure model parameters.
4. A visual assistance service operation method according to claim 1, characterized by: the system operates as follows: the platform receiving unit, the platform feedback unit and the updating unit are separate or integrated.
5. A visual assistance service operation method according to claim 1, characterized by: the operation management platform further comprises a fee deduction operation unit.
6. A visual assistance service operation method according to claim 5, characterized in that: and the fee deducting operation unit is used for carrying out fee deducting operation according to the payment mode of the user and the condition of whether the visual enhancement service is used or not.
7. A visual assistance service operation method according to claim 1, characterized in that the platform receiving unit is used to receive the position information fed back by the user terminal and the user's request for updated generative adversarial model parameters.
8. A visual assistance service operation method according to claim 1, characterized in that the platform feedback unit is used to screen out the generative adversarial model parameters that satisfy the user's position information and parameter-update request, and to feed them back to the user terminal.
9. A visual assistance service operation method according to claim 1, characterized in that step (d2) is implemented as follows:
the generator is constructed as follows:
the first layer is a data layer; the input image is scaled to 256 × 256;
the second layer is a convolution layer with 64 convolution kernels of size (7, 7, 3), followed by a BatchNorm layer; the activation function is a sigmoid function;
the third layer is a convolution layer with 128 convolution kernels of size (3, 3, 64), followed by a BatchNorm layer; the activation function is a sigmoid function;
the fourth layer is a convolution layer with 256 convolution kernels of size (3, 3, 128), followed by a BatchNorm layer; the activation function is a sigmoid function;
the fifth to thirteenth layers are each a two-stage convolution: the first stage uses 256 convolution kernels of size (3, 3, 256), followed by a BatchNorm layer, with a sigmoid activation function; the second stage uses 256 convolution kernels of size (3, 3, 256), followed by a BatchNorm layer;
the fourteenth layer is a convolution layer with 128 convolution kernels of size (3, 3, 256), followed by a BatchNorm layer; the activation function is a sigmoid function;
the fifteenth layer is a convolution layer with 64 convolution kernels of size (3, 3, 128), followed by a BatchNorm layer; the activation function is a sigmoid function;
the sixteenth layer is a convolution layer with 3 convolution kernels of size (7, 7, 64); the activation function is a hyperbolic tangent function;
the discriminator is constructed as follows:
the first layer is a data layer; the input image size is 256 × 256;
the second layer is a convolution layer with 64 convolution kernels of size (4, 4, 3); the activation function is a LeakyReLU function;
the third layer is a convolution layer with 128 convolution kernels of size (4, 4, 64), followed by a BatchNorm layer; the activation function is a LeakyReLU function;
the fourth layer is a convolution layer with 256 convolution kernels of size (4, 4, 128), followed by a BatchNorm layer; the activation function is a LeakyReLU function;
the fifth layer is a convolution layer with 256 convolution kernels of size (4, 4, 256), followed by a BatchNorm layer; the activation function is a LeakyReLU function;
the sixth layer is a convolution layer with 1 convolution kernel of size (4, 4, 256).
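Read as a network specification, the two layer lists can be sanity-checked by tracing feature-map shapes in plain Python. The sketch below assumes stride 1 and 'same' padding throughout (the claims state neither strides nor padding) and takes each kernel's depth to equal the preceding layer's channel count; in particular, the fifth discriminator layer's kernels are taken as (4, 4, 256) to match the 256 channels produced by the fourth layer:

```python
def conv_shape(shape, kernel, n_kernels, stride=1):
    """Output shape of a 'same'-padded convolution; checks channel consistency."""
    h, w, c = shape
    kh, kw, kc = kernel
    assert kc == c, f"kernel depth {kc} must match input channels {c}"
    return ((h + stride - 1) // stride, (w + stride - 1) // stride, n_kernels)

# Generator: layers 2-16 as (kernel, number_of_kernels).
generator = ([((7, 7, 3), 64), ((3, 3, 64), 128), ((3, 3, 128), 256)]
             + [((3, 3, 256), 256)] * 18        # layers 5-13: nine two-stage blocks
             + [((3, 3, 256), 128), ((3, 3, 128), 64), ((7, 7, 64), 3)])

# Discriminator: layers 2-6.
discriminator = [((4, 4, 3), 64), ((4, 4, 64), 128), ((4, 4, 128), 256),
                 ((4, 4, 256), 256), ((4, 4, 256), 1)]

def trace(layers, shape=(256, 256, 3)):
    for kernel, n in layers:
        shape = conv_shape(shape, kernel, n)
    return shape

print(trace(generator))      # generator maps an image back to an image
print(trace(discriminator))  # discriminator maps an image to a 1-channel score map
```

Under these assumptions the generator maps a 256 × 256 × 3 input back to a 256 × 256 × 3 output, and the discriminator produces a single-channel score map, consistent with an image-to-image generative adversarial network for defogging.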
CN201710968408.1A 2017-10-18 2017-10-18 Visual auxiliary service operation method Active CN107766815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710968408.1A CN107766815B (en) 2017-10-18 2017-10-18 Visual auxiliary service operation method

Publications (2)

Publication Number Publication Date
CN107766815A CN107766815A (en) 2018-03-06
CN107766815B (en) 2021-05-18

Family

ID=61269714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710968408.1A Active CN107766815B (en) 2017-10-18 2017-10-18 Visual auxiliary service operation method

Country Status (1)

Country Link
CN (1) CN107766815B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272455B (en) * 2018-05-17 2021-05-04 Xidian University Image defogging method based on weakly supervised generative adversarial network
CN109376903B (en) * 2018-09-10 2021-12-17 Zhejiang University of Technology PM2.5 concentration value prediction method based on game neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809773A (en) * 2016-04-19 2016-07-27 Beijing Electric Vehicle Co., Ltd. Vehicle, vehicle system and parameter adjusting method of vehicle video device
CN106104207A (en) * 2014-03-24 2016-11-09 Qualcomm Incorporated Systems, methods and devices for transmitting positioning assistance data
CN106127702A (en) * 2016-06-17 2016-11-16 Lanzhou University of Technology Image defogging algorithm based on deep learning
CN106446098A (en) * 2016-09-13 2017-02-22 Sichuan Yanbao Technology Co., Ltd. Live-action image processing method and server based on location information
CN107215294A (en) * 2016-03-21 2017-09-29 Hyundai Motor Company Determination of device location in crowded indoor environments

Similar Documents

Publication Publication Date Title
CN109902575B (en) Anti-walking method and device based on unmanned vehicle and related equipment
CN107766815B (en) Visual auxiliary service operation method
WO2022166625A1 (en) Method for information pushing in vehicle travel scenario, and related apparatus
US11830103B2 (en) Method, apparatus, and computer program product for training a signature encoding module and a query processing module using augmented data
CN112132216B (en) Vehicle type recognition method and device, electronic equipment and storage medium
CN103108042A (en) Vehicle-mounted information collection system
EP4365757A1 (en) Method and apparatus for analyzing positional relationship between target objects, and storage medium and electronic device
DE102013016356A1 (en) Method for using a camera system of a motor vehicle as an anti-theft alarm, camera system and motor vehicle
CN113496472A (en) Image defogging model construction method, road image defogging device and vehicle
CN109492821A (en) Stability-maintenance early-warning method and system, and electronic equipment
CN108960398A (en) Wireless remote active monitoring device and monitoring method for vehicle overcrowding
CN115408609A (en) Parking route recommendation method and device, electronic equipment and computer readable medium
CN111210746A (en) Advertising system and method using removable advertising medium
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN111354210B (en) Map generation device, map generation system, map generation method, and map generation program
CN113887297A (en) Safe driving monitoring method and device for forming data closed loop based on cloud
CN114040129A (en) Video generation method, device, equipment and storage medium
CN112261586A (en) Method for automatically identifying driver to limit driving range of driver by using vehicle-mounted robot
CN113408325A (en) Method and device for identifying surrounding environment of vehicle and related equipment
CN111369794B (en) Method, device and equipment for determining traffic participation information and storage medium
US20240119344A1 (en) System and method for expediting distributed feedback for developing of machine learning classifiers
CN112581339B (en) Smart city social element expression system with fairness and openness
CN109741196A (en) Processing method and equipment after vehicle collision
CN210781107U (en) Vehicle-mounted data processing terminal and system
CN115049895B (en) Image attribute identification method, attribute identification model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant