CN116095469A - Method, system, equipment set and storage medium for vehicle monitoring - Google Patents

Method, system, equipment set and storage medium for vehicle monitoring

Info

Publication number
CN116095469A
Authority
CN
China
Prior art keywords
target vehicle
domain controller
cameras
images
preset
Prior art date
Legal status
Pending
Application number
CN202310091938.8A
Other languages
Chinese (zh)
Inventor
孙朋礼
吴媛媛
Current Assignee
Chery Automobile Co Ltd
Original Assignee
Chery Automobile Co Ltd
Priority date
2023-01-17
Filing date
2023-01-17
Publication date
2023-05-09
Application filed by Chery Automobile Co Ltd filed Critical Chery Automobile Co Ltd
Priority to CN202310091938.8A priority Critical patent/CN116095469A/en
Publication of CN116095469A publication Critical patent/CN116095469A/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiments of the present application disclose a method, a system, a device group, and a storage medium for vehicle monitoring, and belong to the technical field of automobiles. The method includes the following steps: when the target vehicle is in a parked and locked state and any sensor detects object activity around the target vehicle, the domain controller of the target vehicle wakes up a plurality of cameras to capture images. The target vehicle can recognize the captured images through a safety recognition model; if the recognition result is unsafe, the domain controller saves the captured images, uploads them to a server, and notifies the user to view them. Thus, if the object activity damages the target vehicle, the user can retrieve the captured images from the domain controller or the server and identify the person or vehicle responsible in order to obtain economic compensation. By adopting the embodiments of the present application, the possibility of property loss can be reduced.

Description

Method, system, equipment set and storage medium for vehicle monitoring
Technical Field
The present disclosure relates to the field of automotive technologies, and in particular, to a method, a system, a device group, and a storage medium for vehicle monitoring.
Background
With the development of society and the progress of science and technology, the number of automobiles in use keeps increasing. As a result, parking spaces have become insufficient, and drivers often have to park their vehicles in distant or out-of-the-way parking spaces.
Consequently, when a vehicle is scratched by another vehicle or stolen, the driver cannot obtain relevant evidence because no camera is installed near the parking space, which results in a large property loss.
Disclosure of Invention
The embodiments of the present application provide a method, a system, a device group, and a storage medium for vehicle monitoring, which can solve the problems in the related art. The technical solution is as follows:
In a first aspect, a method for vehicle monitoring is provided, the method being applied to a vehicle monitoring system, the vehicle monitoring system including a mobile terminal, a server, and a target vehicle, the target vehicle having a plurality of cameras, a plurality of ultrasonic sensors, an inclination sensor, a burglar alarm, a domain controller, and a wireless gateway, the method comprising:
when the target vehicle is in a parked and locked state, and the number of times that the periodic detection data of any ultrasonic sensor changes within a specified duration exceeds a preset threshold, the domain controller wakes up the plurality of cameras;
The domain controller controls the cameras to start shooting images around the target vehicle;
the cameras send the shot images to the domain controller;
stopping shooting and entering a dormant state when the shooting time of the cameras reaches the preset shooting time;
the domain controller inputs images shot by the cameras in a preset shooting time period into a safety recognition model, deletes the images shot in the preset shooting time period if the output result of the safety recognition model is safe, locally stores the images shot in the preset shooting time period if the output result of the safety recognition model is unsafe, and sends the images shot in the preset shooting time period to the wireless gateway, wherein the safety recognition model is a machine learning model;
the wireless gateway sends the images shot in the preset shooting time to a server;
the server stores the images shot in the preset shooting time and sends the images to the mobile terminal;
the mobile terminal displays prompt information, wherein the prompt information is used for prompting a driver to check images shot in the preset duration.
In one possible manner, before the domain controller controls the plurality of cameras to start capturing images around the target vehicle, the method further includes:
when the target vehicle is in a parked and locked state and the inclination angle, detected by the inclination sensor, between the body of the target vehicle and a specified plane changes, the domain controller wakes up the plurality of cameras.
In one possible manner, before the domain controller wakes up the plurality of cameras, the method further includes:
the domain controller receives an alarm notification sent by a burglar alarm of the target vehicle.
In one possible way, the burglar alarm is a fingerprint identifier;
before the domain controller receives the alarm notification sent by the burglar alarm of the target vehicle, the method further comprises:
when the fingerprint information identified by the fingerprint identifier is not matched with the pre-stored fingerprint information, the fingerprint identifier sends an alarm notification to the domain controller.
In a second aspect, a vehicle monitoring system is provided, the vehicle monitoring system including a mobile terminal, a server, and a target vehicle, the target vehicle having a plurality of cameras, a plurality of ultrasonic sensors, an inclination sensor, a burglar alarm, a domain controller, and a wireless gateway, wherein:
The domain controller is used for waking up the plurality of cameras when the target vehicle is in a parked and locked state and the number of times that the periodic detection data of any ultrasonic sensor changes within a specified duration exceeds a preset threshold; and for controlling the plurality of cameras to start shooting images around the target vehicle;
the cameras are used for sending the shot images to the domain controller; stopping shooting and entering a dormant state when the shooting time of the cameras reaches the preset shooting time;
the domain controller is further configured to input images captured by the plurality of cameras within a preset capturing period into a security identification model, delete the images captured within the preset capturing period if an output result of the security identification model is secure, locally store the images captured within the preset capturing period if the output result of the security identification model is unsafe, and send the images captured within the preset capturing period to the wireless gateway, where the security identification model is a machine learning model;
the wireless gateway is used for sending the images shot in the preset shooting time to a server;
The server is used for storing the images shot in the preset shooting time and sending the images to the mobile terminal;
the mobile terminal is used for displaying prompt information, wherein the prompt information is used for prompting a driver to check the images shot in the preset time length.
In one possible manner, the domain controller is further configured to:
when the target vehicle is in a parked and locked state and the inclination angle, detected by the inclination sensor, between the body of the target vehicle and a specified plane changes, the plurality of cameras are woken up.
In one possible manner, the domain controller is further configured to:
the domain controller receives an alarm notification sent by a burglar alarm of the target vehicle.
In one possible mode, the burglar alarm is a fingerprint identifier;
and the fingerprint identifier is used for sending an alarm notification to the domain controller when the identified fingerprint information is not matched with the pre-stored fingerprint information.
In a third aspect, a computer device group is provided comprising a plurality of computer devices, each computer device comprising a memory for storing computer program instructions and a processor; the processor executes the computer program instructions stored in the memory to cause the group of computer devices to perform the method of the first aspect and possible implementations thereof.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing computer program instructions that, in response to being executed by a group of computer devices, perform the method of the first aspect and possible implementations thereof.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
according to the method provided in the embodiments of the present application, when the target vehicle is in a parked and locked state and any sensor detects object activity around the target vehicle, the domain controller of the target vehicle wakes up a plurality of cameras to capture images. The target vehicle can recognize the captured images through the safety recognition model; if the recognition result is unsafe, the domain controller saves the captured images, uploads them to the server, and notifies the user to view them. Thus, if the object activity damages the target vehicle, the user can retrieve the captured images from the domain controller or the server and identify the person or vehicle responsible in order to obtain economic compensation. In this way, the possibility of property loss can be reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a vehicle monitoring system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 4 is a flow chart of a method for vehicle monitoring provided in an embodiment of the present application;
FIG. 5 is a flow chart of a method for vehicle monitoring provided in an embodiment of the present application;
FIG. 6 is a flow chart of a method for vehicle monitoring provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a vehicle monitoring method, which is applied to a vehicle monitoring system. As shown in fig. 1, the system includes a mobile terminal, a server, and a target vehicle, where the target vehicle is provided with a plurality of cameras, a plurality of ultrasonic sensors, an inclination sensor, a burglar alarm, a domain controller, and a wireless gateway. The method is used to monitor the environment around the vehicle, and related images can be saved. The mobile terminal may be a cell phone, a tablet computer, a notebook computer, or the like. The server may be a background server of a related application program, and may be a single server or a device group formed by a plurality of devices.
The plurality of cameras of the target vehicle may include panoramic cameras and a multi-function camera. The panoramic cameras may be mounted above the front and rear bumpers of the target vehicle, on the left and right doors of the target vehicle, and on the left and right rearview mirrors of the target vehicle, or may be mounted at other positions of the target vehicle. The multi-function camera may be mounted below the front windshield, or at another position of the target vehicle. The camera mounted above the front bumper and the multi-function camera mounted below the front windshield are mainly used to capture images in front of the target vehicle; because the multi-function camera below the front windshield is mounted higher than the camera above the front bumper, it is more likely to capture a face image. The camera mounted above the rear bumper is mainly used to capture images directly behind the target vehicle. The camera mounted on the left door and the camera mounted on the left rearview mirror are mainly used to capture images directly to the left of the target vehicle. The camera mounted on the right door and the camera mounted on the right rearview mirror are mainly used to capture images directly to the right of the target vehicle. The field of view of these cameras depends on the type of camera; in practical applications, wide-angle cameras or cameras with an ordinary viewing angle may be used.
The ultrasonic sensors may be mounted above the front and rear bumpers of the target vehicle and on the left and right doors of the target vehicle, or may be mounted at other positions of the target vehicle. The ultrasonic sensor mounted above the front bumper is mainly used to detect objects in front of the target vehicle, the one above the rear bumper to detect objects behind it, the one on the left door to detect objects to its left, and the one on the right door to detect objects to its right.
The inclination sensor may be mounted on the chassis of the target vehicle or elsewhere on the target vehicle. The burglar alarm may be mounted on a door lock of a door of the target vehicle or elsewhere on the target vehicle, and may be an electronic burglar alarm or a chip-type burglar alarm. The domain controller is connected to the plurality of cameras and the plurality of sensors and can perform functions such as image recognition and data processing. The wireless gateway is used for wireless communication between the domain controller and the server.
From a hardware composition point of view, the mobile terminal may be configured as shown in fig. 2, and includes a processor 210, a memory 220, a display unit 230, and a communication unit 240.
The processor 210 may be a CPU (central processing unit ) or SoC (system on chip), etc., and the processor 210 may be configured to execute various instructions, etc., involved in the method.
The memory 220 may include various volatile memories or nonvolatile memories, such as SSD (solid state disk), DRAM (dynamic random access memory ) memory, and the like. The memory 220 may be used to store pre-stored data, intermediate data, and result data during the display of the reminder information, such as images taken during a preset time period, etc.
The display part 230 may be a separate screen, or a screen, a projector, etc. integrated with the terminal body, and the screen may be a touch screen, or may be a non-touch screen, and the display part is used to display images photographed for a preset period of time, etc.
The communication component 240 may be a wired network connector, a WiFi (wireless fidelity ) module, a bluetooth module, a cellular network communication module, or the like. The communication means may be used for data transmission with other devices, which may be servers, other terminals, etc.
In addition to the processor and the memory, the terminal may also include an audio acquisition component, an audio output component, and the like.
The audio capturing component may be a microphone for capturing the voice of the user. The audio output component may be a speaker, earphone, etc. for playing audio.
From a hardware composition perspective, the server may be structured as shown in fig. 3, including a processor 310, a memory 320, and a communication unit 330.
The processor 310 may be a CPU, soC, or the like, and the processor 310 may be configured to execute various instructions, etc., involved in the method.
The memory 320 may include various volatile memory or non-volatile memory, such as SSD, DRAM memory, and the like. The memory 320 may be used to store pre-stored data, intermediate data, and result data of various messages sent by the wireless gateway, for example, images captured during a preset time period, etc.
The communication part 330 may be a wired network connector, a WiFi module, a bluetooth module, a cellular network communication module, etc. The communication means may be used for data transmission with other devices, which may be servers, other terminals, etc. For example, the server transmits an image or the like to the mobile terminal.
In the automotive field, after a driver parks his or her vehicle in a parking space, the vehicle may be struck by other persons or other vehicles, resulting in scratches. In this case, the driver may need visual evidence to find the responsible person or vehicle so that subsequent economic compensation can be obtained.
The embodiment of the application provides a method for monitoring a vehicle according to the application scenario, and the processing flow of the method may be shown in fig. 4, including the following processing steps:
401, when the target vehicle is in a parked and locked state, and the number of times that the periodic detection data of any ultrasonic sensor changes within a specified duration exceeds a preset threshold, the domain controller wakes up a plurality of cameras.
In practice, the target vehicle is locked after the user parks it in a parking space. After the domain controller of the target vehicle recognizes that the target vehicle has been turned off and locked, it activates the plurality of ultrasonic sensors. Each ultrasonic sensor of the target vehicle emits ultrasonic waves at a certain period; the waves reach objects around the target vehicle, the objects reflect them, and the ultrasonic sensor receives the reflected waves. Because the sensor receives waves reflected from different positions on an object's surface, it can generate a three-dimensional depth map with a certain resolution and transmit it to the domain controller. The domain controller of the target vehicle inputs the depth map received in the previous period and the depth map received in the current period into a similarity model to obtain a similarity, and records one change whenever the similarity is lower than a similarity threshold. The similarity model may be a machine learning model, for example, a neural network model. When the specified duration is reached, if the recorded number of changes is greater than the preset threshold, this indicates that there is object motion around the target vehicle: a person may be moving near the vehicle, or another vehicle may be reversing or parking, and such motion may damage the target vehicle. The domain controller of the target vehicle then wakes up the plurality of cameras.
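By way of a non-limiting illustration, the change-counting logic described above can be sketched as follows. This is only an assumed sketch: the embodiment calls for a learned similarity model, for which a simple cosine similarity is substituted here, and the thresholds, the sliding-window reading of the specified duration, and all names are assumptions introduced for illustration.

```python
# Illustrative sketch only (not the embodiment's actual implementation).
# A cosine similarity over flattened depth maps stands in for the learned
# similarity model; thresholds and the sliding window are assumed values.
import numpy as np

SIMILARITY_THRESHOLD = 0.95   # assumed "similarity threshold"
CHANGE_COUNT_THRESHOLD = 3    # assumed "preset threshold"
WINDOW_SECONDS = 10.0         # assumed "specified duration"

def similarity(prev_map: np.ndarray, curr_map: np.ndarray) -> float:
    """Stand-in for the similarity model (described as a machine learning model)."""
    a = prev_map.ravel().astype(float)
    b = curr_map.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0

class UltrasonicMotionDetector:
    """Counts depth-map changes and decides when to wake the cameras."""

    def __init__(self) -> None:
        self.prev_map = None
        self.change_times: list[float] = []

    def on_depth_map(self, depth_map: np.ndarray, timestamp: float) -> bool:
        """Returns True when the domain controller should wake up the cameras."""
        if self.prev_map is not None and similarity(self.prev_map, depth_map) < SIMILARITY_THRESHOLD:
            self.change_times.append(timestamp)   # record that one change occurred
        self.prev_map = depth_map
        # keep only the changes that fall inside the specified duration
        self.change_times = [t for t in self.change_times if timestamp - t <= WINDOW_SECONDS]
        if len(self.change_times) > CHANGE_COUNT_THRESHOLD:
            self.change_times.clear()
            return True                           # object activity around the vehicle
        return False
```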
402, the domain controller controls the plurality of cameras to start capturing images around the target vehicle.
In practice, in order to capture the scene around the target vehicle in an all-around manner, all cameras mounted on the outside of the target vehicle may be controlled to start capturing images around the target vehicle. The cameras of the target vehicle can stitch the images captured in real time to obtain a bird's-eye view image of the surroundings of the target vehicle.
403, the plurality of cameras transmit the photographed image to the domain controller.
In implementation, the cameras may each send the images they capture to the domain controller in real time, or the stitched bird's-eye view image may be sent to the domain controller in real time.
404, stopping shooting and entering a dormant state when shooting time periods of the cameras reach preset shooting time periods.
The preset photographing time period may be 5 minutes.
In an implementation, when the photographing time period of the plurality of cameras reaches the preset photographing time period, the plurality of cameras may stop photographing and enter the sleep state so as not to waste processing resources and storage resources of the target vehicle.
405, the domain controller inputs images shot by the cameras in a preset shooting time period into the safety recognition model, deletes the images shot in the preset shooting time period if the output result of the safety recognition model is safe, locally stores the images shot in the preset shooting time period if the output result of the safety recognition model is unsafe, and sends the images shot in the preset shooting time period to the wireless gateway.
The safety recognition model is a pre-trained machine learning model, for example, a neural network model. To train the safety recognition model, the domain controller of the target vehicle stores some safe samples and dangerous samples in advance, and the user can add images of the user and family members around the target vehicle to the safe samples to obtain updated safe samples. The safety recognition model is trained by maximizing the difference between the updated safe samples and the dangerous samples, and the model that reaches the training end condition is used as the trained safety recognition model.
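By way of a non-limiting illustration, a minimal training sketch consistent with the description above is given below. The embodiment only states that the safety recognition model is a machine learning model such as a neural network, so the architecture, the binary cross-entropy loss (used here to push the updated safe samples and the dangerous samples apart), the fixed-epoch end condition, and all names are assumptions.

```python
# Hypothetical sketch of training a binary "safe vs. dangerous" classifier.
# Assumes safe_images and dangerous_images are float tensors of shape (N, 3, H, W)
# with the same H and W; none of this is prescribed by the embodiment.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SafetyRecognitionModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),   # one logit: > 0 leans "unsafe", <= 0 leans "safe"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train(model, safe_images, dangerous_images, epochs=10, lr=1e-3):
    # label 0 = safe (including user-added images of family members), 1 = dangerous
    images = torch.cat([safe_images, dangerous_images])
    labels = torch.cat([torch.zeros(len(safe_images)), torch.ones(len(dangerous_images))])
    loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()   # drives the two classes apart
    for _ in range(epochs):            # fixed epoch count as an assumed end condition
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x).squeeze(1), y)
            loss.backward()
            optimizer.step()
    return model
```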
In implementation, the domain controller inputs the images shot by each camera within the preset shooting time period into the safety recognition model, and the safety recognition model comprehensively judges the safety of the target vehicle from the images shot by all cameras of the target vehicle within the preset shooting time period. If the output result of the safety recognition model is safe, the state of the target vehicle is safe and the images are not needed, so the images shot within the preset shooting time period can be deleted to save the storage space of the domain controller. If the output result of the safety recognition model is unsafe, the state of the target vehicle may be unsafe and the images may be used as evidence, so the images shot within the preset shooting time period are stored locally. In addition, the images shot within the preset shooting time period can be sent to the wireless gateway so as to remind the user.
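A hedged sketch of this delete-or-store-and-forward decision follows; the storage and gateway objects are hypothetical placeholders introduced only for illustration, and treating any single frame judged unsafe as an unsafe overall result is likewise an assumption.

```python
# Sketch of the decision flow in the domain controller (names are assumptions).
import torch

@torch.no_grad()
def handle_captured_images(model, images: torch.Tensor, storage, gateway) -> str:
    """images: (N, 3, H, W) frames captured within the preset shooting time period."""
    logits = model(images).squeeze(1)
    unsafe = bool((torch.sigmoid(logits) > 0.5).any())   # any frame judged unsafe
    if not unsafe:
        return "deleted"                  # safe: discard to save storage space
    storage.save(images)                  # unsafe: keep locally as evidence (hypothetical API)
    gateway.send_to_server(images)        # forward via the wireless gateway (hypothetical API)
    return "stored_and_forwarded"
```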
406, the wireless gateway sends the images shot within the preset shooting time period to the server.
The wireless gateway is a forwarding station for wireless communication between the domain controller and the outside.
407, the server stores the images shot in the preset shooting time period and sends the images to the mobile terminal.
In implementation, the server can store the images shot in the preset shooting time length, so that the user can conveniently view and retrieve the images later. In addition, the server can also send the images shot in the preset shooting time to the mobile terminal to remind the user to check the images and confirm whether the state of the target vehicle is safe or not.
408, the mobile terminal displays the prompt message.
The prompt information is used for prompting the driver to view the images shot by the plurality of cameras of the target vehicle within the preset duration. The prompt information may be a short message (SMS) prompt or a popup prompt of an application program.
In implementation, the user can click a link in the short message prompt, or click the popup prompt of the application program, and after the mobile terminal recognizes the user's click operation, the images are displayed. The user then confirms, from the images shot by the plurality of cameras within the preset duration, whether the state of the target vehicle is safe. If the confirmation result is that the state of the target vehicle is safe, the user can delete the images shot by the plurality of cameras within the preset duration. If the confirmation result is that the state of the target vehicle is unsafe, the user can go to the parking position of the target vehicle to check the target vehicle. In addition, the user can identify the responsible vehicle or person through the images shot by the plurality of cameras of the target vehicle within the preset duration.
The embodiment of the application further provides a vehicle monitoring method, and the processing flow of the method may be as shown in fig. 5, including the following processing steps:
501, when the target vehicle is in a parked and locked state, and the inclination angle, detected by the inclination sensor, between the body of the target vehicle and a specified plane changes, the domain controller wakes up a plurality of cameras.
The specified plane may be the level ground on which the target vehicle is currently parked.
In practice, the target vehicle is locked after the user parks it in a parking space. After the domain controller of the target vehicle recognizes that the target vehicle has been turned off and locked, the inclination sensor of the target vehicle is activated. When the target vehicle is struck by another vehicle or pushed forcefully by a person, the inclination angle between the body of the target vehicle and the specified plane, as detected by the inclination sensor, may change. At this time, the domain controller wakes up the plurality of cameras.
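A minimal sketch of this tilt-based wake-up condition is given below; the embodiment only states that the angle changes, so the tolerance threshold and the function interface are assumptions.

```python
# Assumed tolerance so that sensor noise alone does not wake the cameras.
ANGLE_CHANGE_THRESHOLD_DEG = 1.0

def should_wake_on_tilt(baseline_angle_deg: float, current_angle_deg: float) -> bool:
    """baseline: body-to-plane angle recorded at lock time; current: latest reading."""
    return abs(current_angle_deg - baseline_angle_deg) > ANGLE_CHANGE_THRESHOLD_DEG
```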
502, the domain controller controls the plurality of cameras to start shooting images around the target vehicle.
503, the plurality of cameras send the photographed images to the domain controller.
504, stopping shooting and entering a dormant state when shooting time periods of the cameras reach preset shooting time periods.
505, the domain controller inputs images shot by the cameras in a preset shooting time period into the safety recognition model, deletes the images shot in the preset shooting time period if the output result of the safety recognition model is safe, locally stores the images shot in the preset shooting time period if the output result of the safety recognition model is unsafe, and sends the images shot in the preset shooting time period to the wireless gateway.
506, the wireless gateway sends the images shot within the preset shooting time period to the server.
507, the server stores the images shot in the preset shooting time period and sends the images to the mobile terminal.
508, the mobile terminal displays the prompt message.
The specific processing of steps 502-508 is similar to steps 402-408, and reference may be made to the relevant descriptions of steps 402-408, which are not repeated here.
The embodiment of the application further provides a vehicle monitoring method, and the processing flow of the method may be as shown in fig. 6, including the following processing steps:
601, a burglar alarm of the target vehicle sends an alarm notification to the domain controller in response to a trigger condition.
The burglar alarm may be a fingerprint identifier, which may be mounted on a door handle of a door of the target vehicle.
In practice, the target vehicle is locked after the user parks it in a parking space. After the domain controller of the target vehicle recognizes that the target vehicle has been turned off and locked, the burglar alarm of the target vehicle is activated. In the case where the burglar alarm is a fingerprint identifier, if a person touches a door handle of the target vehicle, the fingerprint identifier recognizes the fingerprint of the person touching the door. When the fingerprint information identified by the fingerprint identifier is not matched with the pre-stored fingerprint information, or when the number of times that the identified fingerprint information is not matched with the pre-stored fingerprint information reaches a preset threshold, the fingerprint identifier sends an alarm notification to the domain controller.
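The following non-limiting sketch illustrates the fingerprint-triggered alarm notification described above; the matching routine, the mismatch-count threshold, and the domain-controller notification call are assumptions introduced for illustration.

```python
# Hypothetical sketch of the fingerprint identifier acting as the burglar alarm.
MISMATCH_THRESHOLD = 3   # assumed preset threshold for the counting variant

class FingerprintBurglarAlarm:
    def __init__(self, stored_fingerprints, domain_controller) -> None:
        self.stored = stored_fingerprints
        self.dc = domain_controller
        self.mismatch_count = 0

    def on_touch(self, fingerprint) -> None:
        if any(self._matches(fingerprint, ref) for ref in self.stored):
            self.mismatch_count = 0                   # authorized user: reset the counter
            return
        self.mismatch_count += 1                      # identified fingerprint does not match
        if self.mismatch_count >= MISMATCH_THRESHOLD:
            self.dc.receive_alarm_notification()      # hypothetical call; leads to step 602
            self.mismatch_count = 0

    @staticmethod
    def _matches(probe, reference) -> bool:
        return probe == reference                     # placeholder for a real fingerprint matcher
```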
602, the domain controller receives the alarm notification sent by the burglar alarm of the target vehicle and wakes up a plurality of cameras.
603, the domain controller controls the plurality of cameras to start capturing images around the target vehicle.
604, the plurality of cameras send the captured images to the domain controller.
605, stopping shooting and entering a dormant state when shooting time periods of the cameras reach preset shooting time periods.
606, the domain controller inputs images shot by the cameras in a preset shooting time period into the safety recognition model, deletes the images shot in the preset shooting time period if the output result of the safety recognition model is safe, locally stores the images shot in the preset shooting time period if the output result of the safety recognition model is unsafe, and sends the images shot in the preset shooting time period to the wireless gateway.
607, the wireless gateway sends the images shot within the preset shooting time period to the server.
608, the server stores the images shot within the preset shooting time period and sends them to the mobile terminal.
609, the mobile terminal displays the prompt.
The specific processing of steps 603-609 is similar to steps 402-408, and reference may be made to the relevant descriptions of steps 402-408, which are not repeated here.
The above three ways of triggering the domain controller to control the plurality of cameras to start capturing images around the target vehicle may be used individually, in combination of any two, or all together.
According to the method provided in the embodiments of the present application, when the target vehicle is in a parked and locked state and any sensor detects object activity around the target vehicle, the domain controller of the target vehicle wakes up a plurality of cameras to capture images. The target vehicle can recognize the captured images through the safety recognition model; if the recognition result is unsafe, the domain controller saves the captured images, uploads them to the server, and notifies the user to view them. Thus, if the object activity damages the target vehicle, the user can retrieve the captured images from the domain controller or the server and identify the person or vehicle responsible in order to obtain economic compensation. In this way, the possibility of property loss can be reduced.
Based on the same technical concept, the embodiments of the present application further provide a vehicle monitoring system. As shown in fig. 1, the vehicle monitoring system includes a mobile terminal, a server, and a target vehicle, the target vehicle being provided with a plurality of cameras, a plurality of ultrasonic sensors, an inclination sensor, a burglar alarm, a domain controller, and a wireless gateway, wherein:
the domain controller is used for waking up the plurality of cameras when the target vehicle is in a parked and locked state and the number of times that the periodic detection data of any ultrasonic sensor changes within a specified duration exceeds a preset threshold; and for controlling the plurality of cameras to start shooting images around the target vehicle;
a plurality of cameras for transmitting the photographed images to the domain controller; stopping shooting and entering a dormant state when shooting time of the cameras reaches preset shooting time;
the domain controller is further used for inputting images shot by the cameras in a preset shooting time period into the safety recognition model, deleting the images shot in the preset shooting time period if the output result of the safety recognition model is safe, locally storing the images shot in the preset shooting time period if the output result of the safety recognition model is unsafe, and sending the images shot in the preset shooting time period to the wireless gateway, wherein the safety recognition model is a machine learning model;
The wireless gateway is used for sending the images shot in the preset shooting time to the server;
the server is used for storing the images shot in the preset shooting time and sending the images to the mobile terminal;
and the mobile terminal is used for displaying prompt information, wherein the prompt information is used for prompting a driver to check images shot in a preset time length.
In one possible way, the domain controller is further configured to:
when the target vehicle is in a parked and locked state and the inclination angle, detected by the inclination sensor, between the body of the target vehicle and a specified plane changes, the plurality of cameras are woken up.
In one possible manner, the domain controller is further configured to:
the domain controller receives an alarm notification sent by a burglar alarm of the target vehicle.
In one possible way, the burglar alarm is a fingerprint identifier;
and the fingerprint identifier is used for sending an alarm notification to the domain controller when the identified fingerprint information is not matched with the pre-stored fingerprint information.
Through the system provided in the embodiments of the present application, when the target vehicle is in a parked and locked state and any sensor detects object activity around the target vehicle, the domain controller of the target vehicle wakes up a plurality of cameras to capture images. The target vehicle can recognize the captured images through the safety recognition model; if the recognition result is unsafe, the domain controller saves the captured images, uploads them to the server, and notifies the user to view them. Thus, if the object activity damages the target vehicle, the user can retrieve the captured images from the domain controller or the server and identify the person or vehicle responsible in order to obtain economic compensation. In this way, the possibility of property loss can be reduced.
It should be noted that: in the vehicle monitoring system provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the vehicle monitoring system provided in the above embodiment and the method embodiment of vehicle monitoring belong to the same concept, and the specific implementation process is detailed in the method embodiment, which is not repeated here.
Fig. 7 shows a block diagram of an electronic device 700 according to an embodiment of the present application. The electronic device may be the mobile terminal in the above embodiments. The electronic device 700 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 700 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the electronic device 700 includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 701 may be implemented in at least one hardware form of DSP (digital signal processing ), FPGA (field-programmable gate array, field programmable gate array), PLA (programmable logic array ). The processor 701 may also include a main processor, which is a processor for processing data in an awake state, also referred to as a CPU (central processing unit ); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (graphics processing unit, image processor) for taking care of rendering and drawing of content that the display screen is required to display. In some embodiments, the processor 701 may also include an AI (artificial intelligence ) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the methods provided by embodiments of the present application.
In some embodiments, the electronic device 700 may further optionally include: a peripheral interface 703 and at least one peripheral device. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral interface 703 via a bus, a signal line, or a circuit board. Specifically, the peripheral devices include: at least one of a radio frequency circuit 704, a display screen 705, a camera assembly 706, an audio circuit 707, a positioning assembly 708, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (input/output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (radio frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 704 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (wireless fidelity ) networks. In some embodiments, the radio frequency circuitry 704 may also include NFC (near field communication ) related circuitry, which is not limited in this application.
The display screen 705 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, the display 705 also has the ability to collect touch signals at or above the surface of the display 705. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display 705 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 705 may be one, disposed on a front panel of the electronic device 700; in other embodiments, the display 705 may be at least two, respectively disposed on different surfaces of the electronic device 700 or in a folded design; in other embodiments, the display 705 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 705 may be made of LCD (liquid crystal display ), OLED (organic light-emitting diode) or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (virtual reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple, and disposed at different locations of the electronic device 700. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The location component 708 is operative to locate a current geographic location of the electronic device 700 for navigation or LBS (location based service, location-based services). The positioning component 708 may be a GPS (global positioning system ), beidou system or galileo system based positioning component.
The power supply 709 is used to power the various components in the electronic device 700. The power supply 709 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 700 further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyroscope sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the electronic device 700. For example, the acceleration sensor 711 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the electronic device 700, and the gyro sensor 712 may collect a 3D motion of the user on the electronic device 700 in cooperation with the acceleration sensor 711. The processor 701 may implement the following functions based on the data collected by the gyro sensor 712: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed at a side frame of the electronic device 700 and/or at an underlying layer of the display screen 705. When the pressure sensor 713 is disposed at a side frame of the electronic device 700, a grip signal of the user on the electronic device 700 may be detected, and the processor 701 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a fingerprint of the user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 714 may be disposed on the front, back, or side of the electronic device 700. When a physical key or vendor Logo is provided on the electronic device 700, the fingerprint sensor 714 may be integrated with the physical key or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
The proximity sensor 716, also referred to as a distance sensor, is typically provided on the front panel of the electronic device 700. The proximity sensor 716 is used to capture the distance between the user and the front of the electronic device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front of the electronic device 700 gradually decreases, the processor 701 controls the display 705 to switch from the bright-screen state to the off-screen state; when the proximity sensor 716 detects that the distance between the user and the front of the electronic device 700 gradually increases, the processor 701 controls the display 705 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 7 is not limiting of the electronic device 700 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an embodiment of the present application, there is also provided a computer-readable storage medium, for example, a memory including instructions executable by a processor in a terminal to perform the method of vehicle monitoring in the above embodiments. The computer-readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (read-only memory), a RAM (random access memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals (including but not limited to signals transmitted between the user terminal and other devices, etc.) referred to in this application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing descriptions are merely some embodiments of the present application and are not intended to limit the present application; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method of vehicle monitoring, the method being applied to a vehicle monitoring system comprising a mobile terminal, a server and a target vehicle, the target vehicle having a plurality of cameras, a plurality of ultrasonic sensors, an inclination sensor, a burglar alarm, a domain controller and a wireless gateway, the method comprising:
when the target vehicle is in a parked and locked state, and the number of times that the periodic detection data of any ultrasonic sensor changes within a specified duration exceeds a preset threshold, the domain controller wakes up the plurality of cameras;
the domain controller controls the cameras to start shooting images around the target vehicle;
the cameras send the shot images to the domain controller;
stopping shooting and entering a dormant state when the shooting time of the cameras reaches the preset shooting time;
the domain controller inputs images shot by the cameras in a preset shooting time period into a safety recognition model, deletes the images shot in the preset shooting time period if the output result of the safety recognition model is safe, locally stores the images shot in the preset shooting time period if the output result of the safety recognition model is unsafe, and sends the images shot in the preset shooting time period to the wireless gateway, wherein the safety recognition model is a machine learning model;
The wireless gateway sends the images shot in the preset shooting time to a server;
the server stores the images shot in the preset shooting time and sends the images to the mobile terminal;
the mobile terminal displays prompt information, wherein the prompt information is used for prompting a driver to check images shot in the preset duration.
2. The method of claim 1, wherein before the domain controller controls the plurality of cameras to begin capturing images around the target vehicle, the method further comprises:
when the target vehicle is in a parked and locked state and the inclination angle, detected by the inclination sensor, between the body of the target vehicle and a specified plane changes, the domain controller wakes up the plurality of cameras.
3. The method of claim 1, wherein prior to the domain controller waking up the plurality of cameras, the method further comprises:
the domain controller receives an alarm notification sent by a burglar alarm of the target vehicle.
4. A method according to claim 3, wherein the burglar alarm is a fingerprint identifier;
Before the domain controller receives the alarm notification sent by the burglar alarm of the target vehicle, the method further comprises:
and when the fingerprint information identified by the fingerprint identifier is not matched with the pre-stored fingerprint information, sending an alarm notification to the domain controller.
5. A vehicle monitoring system comprising a mobile terminal, a server and a target vehicle, the target vehicle having a plurality of cameras, a plurality of ultrasonic sensors, an inclination sensor, a burglar alarm, a domain controller and a wireless gateway, wherein:
the domain controller is used for waking up the plurality of cameras when the target vehicle is in a parked and locked state and the number of times that the periodic detection data of any ultrasonic sensor changes within a specified duration exceeds a preset threshold; and for controlling the plurality of cameras to start shooting images around the target vehicle;
the cameras are used for sending the shot images to the domain controller; stopping shooting and entering a dormant state when the shooting time of the cameras reaches the preset shooting time;
the domain controller is further configured to input images captured by the plurality of cameras within a preset capturing period into a security identification model, delete the images captured within the preset capturing period if an output result of the security identification model is secure, locally store the images captured within the preset capturing period if the output result of the security identification model is unsafe, and send the images captured within the preset capturing period to the wireless gateway, where the security identification model is a machine learning model;
The wireless gateway is used for sending the images shot in the preset shooting time to a server;
the server is used for storing the images shot in the preset shooting time and sending the images to the mobile terminal;
the mobile terminal is used for displaying prompt information, wherein the prompt information is used for prompting a driver to check the images shot in the preset time length.
6. The system of claim 5, wherein the domain controller is further configured to:
when the target vehicle is in a parked and locked state and the inclination angle, detected by the inclination sensor, between the body of the target vehicle and a specified plane changes, the plurality of cameras are woken up.
7. The system of claim 5, wherein the domain controller is further configured to:
the domain controller receives an alarm notification sent by a burglar alarm of the target vehicle.
8. The system of claim 7, wherein the burglar alarm is a fingerprint identifier;
and the fingerprint identifier is used for sending an alarm notification to the domain controller when the identified fingerprint information is not matched with the pre-stored fingerprint information.
9. A set of computer devices, comprising a plurality of computer devices, each computer device comprising a processor and a memory;
the processors of the plurality of computer devices are configured to execute instructions stored in the memories of the plurality of computer devices to cause the group of computer devices to perform the method of any of claims 1-4.
10. A computer readable storage medium comprising computer program instructions which, when executed by a group of computer devices, perform the method of any of claims 1-4.
CN202310091938.8A (priority date 2023-01-17, filing date 2023-01-17): Method, system, equipment set and storage medium for vehicle monitoring; status: Pending; publication: CN116095469A (en)

Priority Applications (1)

Application Number: CN202310091938.8A; Priority Date: 2023-01-17; Filing Date: 2023-01-17; Title: Method, system, equipment set and storage medium for vehicle monitoring

Applications Claiming Priority (1)

Application Number: CN202310091938.8A; Priority Date: 2023-01-17; Filing Date: 2023-01-17; Title: Method, system, equipment set and storage medium for vehicle monitoring

Publications (1)

Publication Number: CN116095469A; Publication Date: 2023-05-09

Family

ID=86213805

Family Applications (1)

Application Number: CN202310091938.8A; Title: Method, system, equipment set and storage medium for vehicle monitoring; Status: Pending; Publication: CN116095469A (en)

Country Status (1)

Country: CN; Publication: CN116095469A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination