CN115817345A - Panoramic image system control method, control device, vehicle and storage medium


Info

Publication number
CN115817345A
Authority
CN
China
Prior art keywords
vehicle
signal lamp
state
duration
panoramic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211617064.7A
Other languages
Chinese (zh)
Inventor
杨振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN202211617064.7A priority Critical patent/CN115817345A/en
Publication of CN115817345A publication Critical patent/CN115817345A/en
Pending legal-status Critical Current

Abstract

The application discloses a panoramic image system control method, a control device, a vehicle and a storage medium. The panoramic image system control method includes: determining whether the vehicle is in a to-be-steered driving state; when the vehicle is determined to be in the to-be-steered driving state, acquiring a signal lamp duration corresponding to a signal lamp state; and controlling the panoramic image system of the vehicle to turn on according to the signal lamp duration. By controlling the panoramic image system of the vehicle according to the signal lamp duration corresponding to the signal lamp state, the method helps the user observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive the vehicle safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.

Description

Panoramic image system control method, control device, vehicle and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a method and an apparatus for controlling a panoramic image system, a vehicle, and a storage medium.
Background
With the continuous development of vehicle intelligence, more and more vehicles are equipped with a panoramic image system, which conveniently lets the user observe images around the vehicle body from inside the vehicle and provides great convenience during driving.
At present, when a vehicle turns at a signal lamp intersection, the user needs to toggle the turn-signal lever to start the panoramic image system in order to ensure driving safety. However, if the vehicle travels to the signal lamp intersection and the user forgets to toggle the lever, the panoramic image system is not turned on, which increases the collision risk of the vehicle and the safety risk during vehicle travel.
Disclosure of Invention
In view of the above, embodiments of the present application provide a panoramic image system control method, a control apparatus, a vehicle and a storage medium, so as to overcome or at least partially solve the above problems in the prior art.
In a first aspect, an embodiment of the present application provides a method for controlling a panoramic image system, including: determining whether the vehicle is in a to-be-steered driving state; when the vehicle is determined to be in the to-be-steered driving state, acquiring a signal lamp duration corresponding to a signal lamp state; and controlling the panoramic image system of the vehicle to turn on according to the signal lamp duration.
In some optional embodiments, before the signal lamp duration corresponding to the signal lamp state is acquired, the panoramic image system control method further includes: acquiring an environment image of the environment where the vehicle is located; and determining, according to the environment image, whether the vehicle is within a signal lamp intersection range. Acquiring the signal lamp duration corresponding to the signal lamp state then includes: when the vehicle is determined to be within the signal lamp intersection range according to the environment image, acquiring the signal lamp duration corresponding to the signal lamp state.
In some optional embodiments, the panoramic image system control method further includes: when the vehicle is determined not to be within the signal lamp intersection range according to the environment image, determining whether the vehicle has a collision risk; and when the vehicle is determined to have a collision risk, controlling the panoramic image system of the vehicle to turn on.
In some optional embodiments, the signal lamp state is a first signal lamp state, the first signal lamp state is used to indicate that the signal lamp prohibits vehicle passage, and the signal lamp duration is a remaining duration of the first signal lamp. Acquiring the signal lamp duration corresponding to the signal lamp state includes: acquiring the remaining duration of the first signal lamp corresponding to the first signal lamp state. Controlling the panoramic image system of the vehicle to turn on according to the signal lamp duration includes: when the remaining duration of the first signal lamp is less than or equal to a duration threshold, controlling the panoramic image system of the vehicle to turn on.
In some optional embodiments, the signal lamp state is a second signal lamp state, the second signal lamp state is used to indicate that the signal lamp allows vehicles to pass, and the signal lamp duration is a remaining duration of the second signal lamp. Acquiring the signal lamp duration corresponding to the signal lamp state includes: acquiring the remaining duration of the second signal lamp corresponding to the second signal lamp state. Controlling the panoramic image system of the vehicle to turn on according to the signal lamp duration includes: acquiring a driving duration of the vehicle from its current position to the pedestrian crosswalk; and when the remaining duration of the second signal lamp is greater than the driving duration, controlling the panoramic image system of the vehicle to turn on.
In some optional embodiments, determining whether the vehicle is in the to-be-steered driving state includes: determining whether a steering signal of the vehicle is detected, the steering signal being generated based on a toggle operation of the turn-signal lever by the user; when the steering signal of the vehicle is detected, determining that the vehicle is in the to-be-steered driving state; and when the steering signal of the vehicle is not detected, determining that the vehicle is not in the to-be-steered driving state.
In some optional embodiments, determining whether the vehicle is in the to-be-steered driving state includes: acquiring positioning information of the vehicle; determining whether the vehicle is in a turning lane according to a navigation map and the positioning information; when the vehicle is determined to be in a turning lane according to the navigation map and the positioning information, determining that the vehicle is in the to-be-steered driving state; and when the vehicle is determined to be in a straight lane according to the navigation map and the positioning information, determining that the vehicle is not in the to-be-steered driving state.
In a second aspect, an embodiment of the present application provides a control device for a panoramic image system, including a state determining module, a duration obtaining module, and a duration image control module. The state determining module is used for determining whether the vehicle is in a driving state to be steered; the time length acquisition module is used for acquiring the signal lamp time length corresponding to the signal lamp state when the vehicle is determined to be in the driving state to be steered; and the duration image control module is used for controlling the panoramic image system of the vehicle to be started according to the duration of the signal lamp.
In a third aspect, embodiments of the present application provide a vehicle, including a memory; one or more processors coupled with the memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to execute the panoramic image system control method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the panoramic image system control method provided in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which when running on a computer device, causes the computer device to execute the panoramic image system control method provided in the first aspect.
In the solution provided by the application, whether the vehicle is in a to-be-steered driving state is determined; when the vehicle is determined to be in the to-be-steered driving state, the signal lamp duration corresponding to the signal lamp state is acquired, and the panoramic image system of the vehicle is controlled to turn on according to the signal lamp duration. Controlling the panoramic image system according to the signal lamp duration corresponding to the signal lamp state helps the user observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive the vehicle safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic view illustrating a scene of a panoramic image control system according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a method for controlling a panoramic image system according to an embodiment of the present disclosure.
Fig. 3 is a schematic flow chart illustrating a panoramic image system control method according to an embodiment of the present disclosure.
Fig. 4 is a block diagram illustrating a configuration of a control device of a panoramic image system according to an embodiment of the present application.
FIG. 5 shows a functional block diagram of a vehicle provided by an embodiment of the present application.
Fig. 6 illustrates a computer-readable storage medium storing or carrying program codes for implementing a panoramic image system control method according to an embodiment of the present application.
Fig. 7 illustrates a computer program product provided in an embodiment of the present application for storing or carrying program codes for implementing a panoramic image system control method provided in an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is apparent that the embodiments described below are only some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, an application scenario of the panoramic image control system according to the embodiment of the present application is schematically shown, which may include a vehicle 100 and a panoramic image system 200, where the vehicle 100 is communicatively connected to the panoramic image system 200 and performs data interaction with the panoramic image system 200.
The vehicle 100 may be an electric vehicle (e.g., a battery electric vehicle), a hybrid vehicle (e.g., a hybrid electric vehicle (HEV)), a fuel-powered vehicle, a gas-powered vehicle, or the like, which is not limited herein.
The vehicle 100 may include a frame 110 and a Vehicle Control Unit (VCU) 120. The VCU 120 and the panoramic image system 200 may be mounted to the frame 110, and the frame 110 may provide mounting support for the VCU 120 and the panoramic image system 200.
The VCU 120 may be communicatively coupled to the panoramic image system 200 and may interact with it. The VCU 120 may be configured to control the panoramic image system 200 to turn on or off. The VCU 120 is the core control component of the vehicle 100 and acts as the brain of the vehicle 100; it may be configured to collect signals (e.g., a power-on signal, a power-off signal, an accelerator pedal signal, a brake pedal signal, and other component signals) and to control the corresponding components according to the collected signals.
The VCU 120 acts as a command management center for the vehicle 100, and its main functions may include: driving torque control, optimal control of braking energy, energy management of the entire vehicle, maintenance and management of a Controller Area Network (CAN), diagnosis and processing of faults, monitoring of vehicle conditions, etc., and thus, the quality of the VCU 120 directly determines the stability and safety of the vehicle 100.
The panoramic image system 200 is composed of a head camera 210, a tail camera 220, a left side camera 230 and a right side camera 240 which are installed on the vehicle 100, and the VCU 120 is in communication connection with the head camera 210, the tail camera 220, the left side camera 230 and the right side camera 240 and performs data interaction with the head camera 210, the tail camera 220, the left side camera 230 and the right side camera 240.
The vehicle head camera 210 is configured to acquire a vehicle head environment image of the vehicle head direction of the vehicle 100, and send the acquired vehicle head environment image to the VCU 120; the vehicle tail camera 220 is configured to collect a vehicle tail environment image of the vehicle 100 in the direction of the vehicle tail, and send the collected vehicle tail environment image to the VCU 120; the left camera 230 is configured to collect a left environment image of the left direction of the vehicle 100, and send the collected left environment image to the VCU 120; the right camera 240 is configured to capture a right environment image of the right direction of the vehicle 100 and transmit the captured right environment image to the VCU 120.
The VCU 120 may be configured to receive the vehicle head environment image sent by the vehicle head camera 210, the vehicle tail environment image sent by the vehicle tail camera 220, the left environment image sent by the left side camera 230, and the right environment image sent by the right side camera 240, and splice the received vehicle head environment image, the vehicle tail environment image, the left environment image, and the right environment image into a vehicle peripheral bird's-eye view.
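To illustrate this stitching step, the following minimal Python sketch composites the four camera views into a single bird's-eye canvas. It assumes each camera has been calibrated offline so that a homography maps its image onto a common ground plane; the canvas size, the homography matrices and the simple overwrite compositing are illustrative assumptions, not the implementation used by the VCU 120.

    # Minimal sketch of stitching four camera views into a bird's-eye view.
    # Assumptions: each camera has been calibrated offline so that a homography
    # maps its image onto a common ground-plane canvas; sizes and matrices are
    # hypothetical placeholders, not values from this application.
    import cv2
    import numpy as np

    CANVAS_SIZE = (800, 800)  # width, height of the bird's-eye canvas (assumed)

    def stitch_birdseye(images, homographies):
        """images/homographies: dicts keyed by 'front', 'rear', 'left', 'right'."""
        canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), dtype=np.uint8)
        for view, img in images.items():
            H = homographies[view]
            warped = cv2.warpPerspective(img, H, CANVAS_SIZE)
            mask = warped.sum(axis=2) > 0          # pixels covered by this view
            canvas[mask] = warped[mask]            # simple overwrite composite
        return canvas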
In some embodiments, the vehicle 100 may further include a central control screen that may be communicatively coupled to the VCU 120 and interact with the VCU 120. The VCU 120 may also be configured to send the stitched overhead view of the vehicle periphery to the central control screen; the central control screen may be configured to receive the bird's-eye view of the periphery of the vehicle sent by the VCU 120, and display the bird's-eye view of the periphery of the vehicle.
In some embodiments, the panoramic image control system may further include a positioning module, and the positioning module may be mounted to the frame 110, and the frame 110 may provide mounting support for the positioning module. The location module may be communicatively coupled to the VCU 120 and interact with the VCU 120.
The positioning module may be configured to obtain positioning information of the vehicle and send the obtained positioning information to the VCU 120. The positioning module may be a satellite positioning module (e.g., a Global Positioning System (GPS) module), a signal positioning module (e.g., a Wireless Fidelity (Wi-Fi) positioning module), or the like, and the positioning information may be geographical location information.
In some embodiments, the panoramic image control system may further include a map module, and the map module may be mounted to the frame 110, and the frame 110 may provide mounting support for the map module. The map module may be communicatively coupled to the VCU 120 and interact with the VCU 120. The map module may be configured to obtain a navigation map from the map service platform and send the obtained navigation map to the VCU 120.
In some embodiments, the panoramic image control system may further include a vehicle-to-everything (V2X) module. The V2X module may be mounted to the frame 110, and the frame 110 may provide mounting support for the V2X module. The V2X module may be communicatively coupled to the VCU 120 and interact with the VCU 120. The V2X module may be configured to acquire signal light information of a signal light and send the acquired signal light information to the VCU 120. The signal light information may include at least one of signal light color information (e.g., red light, green light, or yellow light information) and a signal light duration (e.g., a red light duration, a green light duration, or a yellow light duration).
Referring to fig. 2, a flowchart of a method for controlling a panoramic image system according to an embodiment of the present application is shown. In an embodiment, the panoramic image system control method may be applied to the VCU 120 in the panoramic image control system shown in fig. 1, and the flow shown in fig. 2 will be described in detail below by taking the VCU 120 as an example, and the panoramic image system control method may include the following steps S110 to S130.
Step S110: it is determined whether the vehicle is in a to-be-steered running state.
In an embodiment of the present application, the VCU may determine whether the vehicle is in a to-be-steered driving state, which may be used to indicate that the user driving the vehicle has an intent to turn.
In some embodiments, the VCU may determine whether a steering signal of the vehicle is detected, obtain a first determination result, and may determine whether the vehicle is in a to-be-steered driving state according to the first determination result. The steering signal may be generated based on a toggle operation of the steering lever by the user, and the first determination result may include determining that the steering signal of the vehicle is detected, or determining that the steering signal of the vehicle is not detected, or the like.
When the steering signal of the vehicle is detected, determining that the vehicle is in a driving state to be steered; when it is determined that the steering signal of the vehicle is not detected, it is determined that the vehicle is not in the to-be-steered running state.
Specifically, the VCU may acquire a vehicle signal of the vehicle, match the vehicle signal against a preset steering signal to obtain a signal matching degree, and obtain the first determination result according to the signal matching degree. When the signal matching degree is greater than or equal to a matching degree threshold, it is determined that the steering signal of the vehicle is detected; when the signal matching degree is less than the matching degree threshold, it is determined that the steering signal of the vehicle is not detected.
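As a hedged illustration of this matching logic, the Python sketch below compares an observed vehicle signal against a preset steering-signal template; the signal representation, the similarity measure and the 0.8 threshold are hypothetical stand-ins for whatever comparison the VCU actually performs.

    # Sketch: decide whether a turn signal is present by matching the observed
    # vehicle signal against a preset steering-signal template.
    # The template, the similarity measure and the 0.8 threshold are assumptions.
    MATCH_THRESHOLD = 0.8

    def signal_matching_degree(vehicle_signal, preset_signal):
        # Toy similarity: fraction of samples that agree between the two signals.
        agree = sum(1 for a, b in zip(vehicle_signal, preset_signal) if a == b)
        return agree / max(len(preset_signal), 1)

    def steering_signal_detected(vehicle_signal, preset_signal):
        return signal_matching_degree(vehicle_signal, preset_signal) >= MATCH_THRESHOLD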
In some embodiments, the VCU may acquire the positioning information of the vehicle, determine whether the vehicle is in a turning lane according to the navigation map and the positioning information, obtain a second determination result, and determine whether the vehicle is in a driving state to be turned according to the second determination result. Wherein the second determination may include determining that the vehicle is in a turn lane according to the navigation map and the positioning information, or determining that the vehicle is in a straight lane according to the navigation map and the positioning information.
When the vehicle is determined to be in a turning lane according to the navigation map and the positioning information, it is determined that the vehicle is in the to-be-steered driving state; when the vehicle is determined to be in a straight lane according to the navigation map and the positioning information, it is determined that the vehicle is not in the to-be-steered driving state.
As an embodiment, the panoramic image control system may further include a positioning module and a map module, where the positioning module may be configured to obtain positioning information of the vehicle, and the map module may be configured to obtain a navigation map.
The VCU can send a positioning information acquisition instruction to the positioning module, the positioning module receives and responds to the positioning information acquisition instruction to acquire the positioning information of the vehicle, the acquired positioning information is sent to the VCU, and the VCU receives the positioning information returned by the positioning module.
The VCU can send a map acquisition instruction to the map module, the map module receives and responds to the map acquisition instruction, a corresponding navigation map is acquired from the map service platform, the acquired navigation map is sent to the VCU, and the VCU receives the navigation map returned by the map module.
As an implementation, when the VCU has acquired the positioning information and the navigation map of the vehicle, it may fuse the navigation map and the positioning information to obtain fused position information and determine, according to the fused position information, whether the vehicle is located in a turning lane. Determining the driving lane from both the positioning information and the navigation map improves the accuracy of lane determination and therefore the accuracy of controlling the panoramic image system.
The fusion position information comprises first fusion position information used for representing that the positioning information is located in a turning map lane in the navigation map, and second fusion position information used for representing that the positioning information is located in a straight map lane in the navigation map.
When first fusion position information used for representing that the positioning information is located in a turning map lane in a navigation map is obtained, determining that the vehicle is located in the turning lane, and determining that the vehicle is in a driving state to be turned; and when second fusion position information used for representing that the positioning information is in a straight map lane in the navigation map is obtained, determining that the vehicle is in the straight lane, and determining that the vehicle is in a straight driving state.
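A minimal sketch of this lane determination follows, assuming a hypothetical navigation-map interface that reports the lane type at a given position; the NavigationMap helper and lane labels are illustrative, not part of the application.

    # Sketch: fuse a map-matched lane attribute with the vehicle's position to
    # decide whether the vehicle is in a to-be-steered driving state.
    # The nav_map interface and lane labels are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Position:
        latitude: float
        longitude: float

    def is_to_be_steered(nav_map, position: Position) -> bool:
        """nav_map.lane_type_at(...) is an assumed helper returning 'turn' or 'straight'."""
        lane_type = nav_map.lane_type_at(position.latitude, position.longitude)
        return lane_type == "turn"   # turning lane -> to-be-steered driving state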
Step S120: and when the vehicle is determined to be in the driving state to be steered, acquiring the signal lamp duration corresponding to the signal lamp state.
In the embodiment of the application, when the VCU determines that the vehicle is in the to-be-steered driving state, it may acquire an environment image of the environment where the vehicle is located and determine, according to the environment image, whether the vehicle is within the signal lamp intersection range. When the VCU determines from the environment image that the vehicle is within the signal lamp intersection range, it may determine the signal lamp state of the signal lamp intersection range and acquire the signal lamp duration corresponding to that state.
The signal lamp intersection range may be used to represent the range within which the vehicle can capture an image of the signal lamp device. It may be a range preset by the user, a range automatically generated by the VCU from multiple previous control processes of the panoramic image system, or the like; it is not limited herein and may be set according to actual requirements.
The signal light state may be any one of a first signal light state, a second signal light state, a third signal light state, and the like, the first signal light state may be used for representing that the signal light is in a vehicle passage prohibition state, for example, the first signal light state may be a red light state, the second signal light state may be used for representing that the signal light is in a vehicle passage permission state, for example, the second signal light state may be a green light state, and the third signal light state may be used for representing that the signal light is in a vehicle waiting for passage warning state, for example, the third signal light state may be a yellow light state.
The signal lamp duration may be any one of a first signal lamp remaining duration corresponding to the first signal lamp state, a second signal lamp remaining duration corresponding to the second signal lamp state, or a third signal lamp remaining duration corresponding to the third signal lamp state, for example, the first signal lamp remaining duration may be a red lamp remaining duration corresponding to the red lamp state, the second signal lamp remaining duration may be a green lamp remaining duration corresponding to the green lamp state, and the third signal lamp remaining duration may be a yellow lamp remaining duration.
The type of the signal lamp state and the type of the signal lamp duration are not limited, and the signal lamp state and the signal lamp duration can be set according to actual requirements.
In some embodiments, the VCU stores a pre-trained deep learning network model, which is used to detect signal light devices in the image.
When the VCU determines that the vehicle is in the to-be-steered driving state, it may send an image acquisition instruction to the head camera. The head camera receives and responds to the image acquisition instruction, captures an environment image of the environment where the vehicle is located, and sends the captured environment image to the VCU. The VCU receives the environment image returned by the head camera and inputs it into the deep learning network model; the model processes the environment image and outputs a signal lamp device detection result to the VCU. The VCU receives the detection result returned by the deep learning network model and determines, according to the detection result, whether the vehicle is within the signal lamp intersection range.
The signal lamp equipment detection result comprises a first detection result used for representing that signal lamp equipment is detected and a second detection result used for representing that signal lamp equipment is not detected.
When the VCU receives a first detection result returned by the deep learning network model, determining that the vehicle is positioned in the range of the signal lamp intersection; and when the VCU receives a second detection result returned by the deep learning network model, determining that the vehicle is not in the range of the signal lamp intersection.
The deep learning network model may be a Convolutional Neural Network (CNN) model, a Deep Belief Network (DBN) model, a Stacked Auto-Encoder (SAE) network model, a Recurrent Neural Network (RNN) model, a Deep Neural Network (DNN) model, a Long Short-Term Memory (LSTM) network model, a Gated Recurrent Unit (GRU) model, or the like. The type of the deep learning network model is not limited herein and may be set according to actual requirements.
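As a hedged sketch of how such a detection result could drive the intersection-range decision, the snippet below assumes a generic pre-trained detector exposed through a detect() call; the detector wrapper, its label names and the confidence threshold are illustrative assumptions rather than the trained model described above.

    # Sketch: use a pre-trained detector to decide whether the vehicle is within
    # the signal lamp intersection range. The detector object, its detect()
    # method and the confidence threshold are assumptions for illustration.
    def within_signal_intersection(detector, environment_image, min_confidence=0.5):
        detections = detector.detect(environment_image)   # list of (label, confidence, box)
        for label, confidence, _box in detections:
            if label == "traffic_light" and confidence >= min_confidence:
                return True    # first detection result: signal lamp device found
        return False           # second detection result: no signal lamp device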
In some embodiments, the panoramic image control system may further include a service platform, the service platform is connected to the VCU through internet of vehicles and performs data interaction with the VCU through the internet of vehicles, and the service platform stores a pre-trained deep learning network model.
When the VCU determines that the vehicle is located in a turning lane, it may send an image acquisition instruction to the head camera. The head camera receives and responds to the instruction, captures an environment image of the environment where the vehicle is located, and sends it to the VCU. The VCU receives the environment image returned by the head camera and forwards it to the service platform through the Internet of Vehicles. The service platform receives the environment image and inputs it into the deep learning network model, which outputs a signal lamp device detection result to the service platform. The service platform sends the detection result to the VCU through the Internet of Vehicles, and the VCU determines, according to the detection result returned by the service platform, whether the vehicle is within the signal lamp intersection range.
The service platform may be an independent physical service platform, a service platform cluster or a distributed system formed by a plurality of physical service platforms, a cloud service platform providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), big data, an artificial intelligence platform, and the like, where the type of the service platform is not limited, and the service platform may be specifically set according to actual needs.
The Internet of Vehicles is a large system network based on the in-vehicle network, the inter-vehicle network and the vehicle-mounted mobile Internet, in which vehicles communicate wirelessly and exchange information with roads, other vehicles, people and the Internet according to agreed communication protocols and data interaction standards. It is an integrated network capable of realizing intelligent traffic management, intelligent dynamic information services and intelligent vehicle control.
In some embodiments, when the VCU determines from the environmental image that the vehicle is within the signal intersection range, a signal light status of the signal intersection range may be determined.
As an embodiment, the deep learning network model may be further configured to detect a display color corresponding to the signal lamp device in the environment image, where the first detection result includes the signal lamp device and the display color corresponding to the signal lamp device, and the display color may be a first display color (for example, red display), a second display color (for example, green display), a third display color (for example, yellow display), or the like.
When the VCU receives the first detection result and determines that the vehicle is in the range of the signal lamp intersection, the VCU can acquire the display color in the first detection result and determine the signal lamp state in the range of the signal lamp intersection according to the display color.
As an example, the first detection result is a signal lamp device and a first display color corresponding to the signal lamp device. When the VCU receives the signal lamp equipment and the first display color corresponding to the signal lamp equipment, and determines that the vehicle is in the signal lamp intersection range, the first display color can be obtained, and the signal lamp state in the signal lamp intersection range is determined to be the first signal lamp state according to the first display color.
As an example, the first detection result is a signal lamp device and a second display color corresponding to the signal lamp device. When the VCU receives the signal lamp device and the second display color corresponding to the signal lamp device, and determines that the vehicle is in the signal lamp intersection range, the VCU can obtain the second display color, and determine that the signal lamp state in the signal lamp intersection range is the second signal lamp state according to the second display color.
As an example, the first detection result is the signal lamp device and a third display color corresponding to the signal lamp device. When the VCU receives the signal lamp device and the third display color corresponding to the signal lamp device, and determines that the vehicle is in the signal lamp intersection range, the VCU can obtain the third display color, and determine that the signal lamp state in the signal lamp intersection range is the third signal lamp state according to the third display color.
As an implementation, the panoramic image control system may further include a V2X module. The V2X module may be connected to the signal lamp device controller of the signal lamp device through V2X communication and perform data interaction with the signal lamp device controller through V2X, and the signal lamp device controller may be configured to control the display color of the signal lamp device.
When the VCU determines from the environment image that the vehicle is within the signal lamp intersection range, it may send a color acquisition instruction to the V2X module. The V2X module receives and responds to the instruction and forwards it to the signal lamp device controller through V2X. The signal lamp device controller receives and responds to the instruction and sends the display color of the signal lamp device within the signal lamp intersection range back to the V2X module through V2X. The V2X module receives the display color sent by the signal lamp device controller and returns it to the VCU, and the VCU determines the signal lamp state of the signal lamp intersection range according to the display color.
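The request/response exchange above can be pictured with the following minimal sketch; the V2X module interface and the message field names are hypothetical (a real deployment would carry this information in standardized V2X messages such as SPaT), and the mapping from display color to signal lamp state simply mirrors the states defined earlier.

    # Sketch of the V2X request/response exchange described above.
    # The v2x_module interface and the message field names are hypothetical.
    COLOR_TO_STATE = {"red": "first", "green": "second", "yellow": "third"}

    def query_signal_state(vcu, v2x_module):
        reply = v2x_module.request({"type": "color_acquisition"})  # forwarded to the lamp controller
        color = reply["display_color"]          # e.g. "red", "green" or "yellow"
        remaining = reply.get("remaining_s")    # remaining duration, if provided
        vcu.signal_state = COLOR_TO_STATE[color]
        vcu.signal_remaining_s = remaining
        return vcu.signal_state, remaining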
In some embodiments, when the VCU determines the signal light status of the signal light intersection range, the signal light duration corresponding to the signal light status may be obtained.
As an implementation manner, the signal lamp state may be a first signal lamp state, and when the VCU determines that the signal lamp state in the signal lamp intersection range is the first signal lamp state, the remaining time length of the first signal lamp corresponding to the first signal lamp state may be acquired.
As an example, the deep learning network model may be further configured to detect a first remaining signal lamp duration corresponding to a first display color of the signal lamp device in the environment image, and the first detection result is the signal lamp device and the first display color and the first remaining signal lamp duration corresponding to the signal lamp device.
When the VCU receives the first detection result, determines that the vehicle is located in the signal lamp intersection range, and determines that the signal lamp state of the signal lamp intersection range is the first signal lamp state according to the first display color in the obtained first detection result, the remaining time length of the first signal lamp in the first detection result can be obtained.
As an example, the panoramic image control system may further include a V2X module, and the signal lamp device controller may be further configured to report the remaining duration corresponding to the current display color of the signal lamp device. When the VCU determines that the signal lamp state in the signal lamp intersection range is the first signal lamp state, it may send a first signal lamp duration acquisition instruction to the V2X module. The V2X module receives and responds to the instruction and forwards it to the signal lamp device controller through V2X. The signal lamp device controller receives and responds to the instruction and sends the remaining duration of the first signal lamp corresponding to the first display color back to the V2X module through V2X. The V2X module receives the remaining duration sent by the signal lamp device controller and returns it to the VCU, and the VCU receives the remaining duration of the first signal lamp returned by the V2X module.
As an implementation manner, the signal lamp state may be a second signal lamp state, and when the VCU determines that the signal lamp state in the signal lamp intersection range is the second signal lamp state, the remaining time length of the second signal lamp corresponding to the second signal lamp state may be acquired.
As an example, the deep learning network model may be further configured to detect a second remaining signal lamp duration corresponding to a second display color of the signal lamp device in the environment image, and the first detection result is the signal lamp device and the second display color corresponding to the signal lamp device and the second remaining signal lamp duration.
When the VCU receives the first detection result, determines that the vehicle is located in the signal lamp intersection range, and determines that the signal lamp state of the signal lamp intersection range is the second signal lamp state according to the second display color in the first detection result, the remaining time length of the second signal lamp in the first detection result can be obtained.
As an example, the panoramic image control system may further include a V2X module. When the VCU determines that the signal lamp state in the signal lamp intersection range is the second signal lamp state, it may send a second signal lamp duration acquisition instruction to the V2X module. The V2X module receives and responds to the instruction and forwards it to the signal lamp device controller through V2X. The signal lamp device controller receives and responds to the instruction and sends the remaining duration of the second signal lamp corresponding to the second display color back to the V2X module through V2X. The V2X module receives the remaining duration sent by the signal lamp device controller and returns it to the VCU, and the VCU receives the remaining duration of the second signal lamp returned by the V2X module.
Step S130: and controlling the panoramic image system of the vehicle to be started according to the signal lamp duration.
In the embodiment of the application, when the VCU determines that the vehicle is in the to-be-steered driving state and has acquired the signal lamp duration corresponding to the signal lamp state, it may control the panoramic image system of the vehicle to turn on according to that duration. Controlling the panoramic image system according to the signal lamp duration corresponding to the signal lamp state helps the user observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.
In some embodiments, the signal lamp state is the first signal lamp state and the signal lamp duration is the remaining duration of the first signal lamp. When the VCU determines that the vehicle is in the to-be-steered driving state and has acquired the remaining duration of the first signal lamp, it may control the panoramic image system of the vehicle to turn on when that remaining duration is less than or equal to the duration threshold. In this way, while the vehicle waits within the signal lamp intersection range, the panoramic image system is turned on according to the remaining duration of the first signal lamp. This avoids the situation in which, when the waiting time at the first signal lamp is long, the panoramic image system stays on and occupies the central control screen of the vehicle for a long time, so that the user cannot operate the central control screen and the driving experience is degraded.
The duration threshold may be a duration preset by a user, or a duration automatically generated by the VCU according to a control process of controlling the panoramic image system for multiple times. For example, the duration threshold may be 5 seconds(s), the duration threshold may also be 3s, the duration threshold may also be 8s, and the like, and the value of the duration threshold is not limited herein, and may be specifically set according to actual requirements.
In an application scenario, the duration threshold is 5s. When the remaining duration of the first signal lamp is less than or equal to 5s, the remaining waiting time is short and the VCU may control the panoramic image system of the vehicle to turn on; when the remaining duration of the first signal lamp is greater than 5s, the remaining waiting time is long and the VCU may control the panoramic image system of the vehicle to be turned off.
In some embodiments, the signal lamp state is the second signal lamp state and the signal lamp duration is the remaining duration of the second signal lamp. When the VCU determines that the vehicle is in the to-be-steered driving state and has acquired the remaining duration of the second signal lamp, it may acquire the driving duration of the vehicle from its current position to the pedestrian crosswalk and control the panoramic image system of the vehicle to turn on when the remaining duration of the second signal lamp is greater than or equal to the driving duration. In this way, while the vehicle drives within the signal lamp intersection range, the panoramic image system is turned on according to the remaining duration of the second signal lamp, which helps ensure that the vehicle passes through the signal lamp intersection safely in the second signal lamp state.
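The turn-on decision described in the two embodiments above can be summarized in the following sketch; the 5 s threshold matches the application scenario described above, while the state labels and the way the driving duration is obtained are illustrative assumptions.

    # Sketch of the turn-on decision for the first (red) and second (green)
    # signal lamp states. The state labels and the estimation of the driving
    # duration (e.g. distance to the crosswalk / current speed) are assumptions.
    DURATION_THRESHOLD_S = 5.0

    def should_open_panoramic_view(state, remaining_s, driving_duration_s=None):
        if state == "first":        # passage prohibited (e.g. red light)
            return remaining_s <= DURATION_THRESHOLD_S
        if state == "second":       # passage allowed (e.g. green light)
            return driving_duration_s is not None and remaining_s >= driving_duration_s
        return False                # e.g. third (yellow) state: not covered here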
According to the solution provided by this embodiment, whether the vehicle is in the to-be-steered driving state is determined; when the vehicle is determined to be in the to-be-steered driving state, the signal lamp duration corresponding to the signal lamp state is acquired, and the panoramic image system of the vehicle is controlled to turn on according to the signal lamp duration. This helps the user observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive the vehicle safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.
Please refer to fig. 3, which illustrates a flowchart of a panoramic image system control method according to another embodiment of the present application. In a specific embodiment, the panoramic image system control method may be applied to the VCU 120 in the panoramic image control system shown in fig. 1, and the flow shown in fig. 3 will be described in detail by taking the VCU 120 as an example, and the panoramic image system control method may include the following steps S210 to S250.
Step S210: it is determined whether the vehicle is in a to-be-steered running state.
Step S220: and when the vehicle is determined to be in the driving state to be steered, acquiring the signal lamp duration corresponding to the signal lamp state.
Step S230: and controlling the panoramic image system of the vehicle to be started according to the signal lamp duration.
In this embodiment, the steps S210, S220 and S230 may refer to the content of the corresponding steps in the foregoing embodiments, and are not described herein again.
Step S240: and when the vehicle is determined not to be in the signal lamp intersection range according to the environment image, determining whether the vehicle has collision risk.
In this embodiment, when the VCU determines from the environment image that the vehicle is not within the signal lamp intersection range, it may acquire a spatial image in the driving direction of the vehicle and determine, according to the spatial image, whether the vehicle has a collision risk.
Specifically, when the VCU determines from the environment image that the vehicle is not within the signal lamp intersection range, it may determine a driving track of the vehicle according to the driving speed, gear and steering angle of the vehicle, and send a spatial image acquisition instruction to the camera associated with the driving direction of the vehicle. The camera receives and responds to the instruction, captures a spatial image in the driving direction of the vehicle, and sends the spatial image to the VCU. The VCU receives the spatial image returned by the camera and determines, according to the driving track and the spatial image, whether the vehicle has a collision risk.
When it is determined from the driving track and the spatial image that an obstacle exists on the driving track of the vehicle, it is determined that the vehicle has a collision risk; when it is determined that no obstacle exists on the driving track of the vehicle, it is determined that the vehicle has no collision risk.
The driving direction of the vehicle may be a forward direction (e.g., a left-turn forward direction or a right-turn forward direction) or a reverse direction (e.g., a left-turn reverse direction or a right-turn reverse direction); the head camera is associated with the forward direction and the tail camera with the reverse direction. The obstacle may be at least one of a moving vehicle, a wheel stop, a stone bollard, a traffic cone, a ground lock, a pedestrian, a bicycle or a guardrail; the type of obstacle is not limited herein and may be set according to actual requirements.
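As an illustrative, simplified sketch of this collision-risk check (not the patented algorithm), the snippet below flags a risk when any detected obstacle overlaps the predicted driving track; the obstacle type, the sampled track representation and the lateral margin are assumptions.

    # Sketch: flag a collision risk when any detected obstacle lies on the
    # predicted driving track. The Obstacle type, track representation and
    # overlap test are simplified assumptions.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        x: float        # position in the vehicle frame, metres
        y: float
        radius: float   # rough footprint of the obstacle

    def has_collision_risk(track_points, obstacles, lateral_margin=1.0):
        """track_points: list of (x, y) points sampled along the predicted track."""
        for obstacle in obstacles:
            for tx, ty in track_points:
                dist_sq = (tx - obstacle.x) ** 2 + (ty - obstacle.y) ** 2
                if dist_sq <= (obstacle.radius + lateral_margin) ** 2:
                    return True     # obstacle overlaps the driving track
        return False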
Step S250: and when the vehicle is determined to have the collision risk, controlling a panoramic image system of the vehicle to be started.
In this embodiment, when the VCU determines that the vehicle has a collision risk, it may control the panoramic image system of the vehicle to turn on. Controlling the panoramic image system according to the collision risk of the vehicle helps the user observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive the vehicle safely to avoid obstacles, thereby improving driving safety during vehicle travel.
According to the solution provided by this embodiment, whether the vehicle is in the to-be-steered driving state is determined; when the vehicle is determined to be in the to-be-steered driving state, the signal lamp duration corresponding to the signal lamp state is acquired and the panoramic image system of the vehicle is controlled to turn on according to the signal lamp duration; when the vehicle is determined not to be within the signal lamp intersection range according to the environment image, whether the vehicle has a collision risk is determined, and when the vehicle is determined to have a collision risk, the panoramic image system of the vehicle is controlled to turn on. This helps the user observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive the vehicle safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.
Further, when the vehicle is determined not to be within the signal lamp intersection range and is determined to have a collision risk, the panoramic image system of the vehicle is controlled to turn on, so that the user can observe a panoramic image of the vehicle's surroundings through the panoramic image system and drive the vehicle safely to avoid obstacles, improving driving safety during vehicle travel.
Referring to fig. 4, which shows a panoramic image system control apparatus 300 according to an embodiment of the present application, the panoramic image system control apparatus 300 may be applied to the VCU 120 in the panoramic image control system shown in fig. 1, and the panoramic image system control apparatus 300 shown in fig. 4 will be described below by taking the VCU 120 as an example, and the panoramic image system control apparatus 300 may include a state determining module 310, a duration obtaining module 320, and a duration image control module 330.
The state determination module 310 may be used to determine whether the vehicle is in a to-be-steered driving state; the duration obtaining module 320 may be configured to obtain a signal lamp duration corresponding to a signal lamp status when it is determined that the vehicle is in a driving state to be steered; the duration image control module 330 may be configured to control the panoramic image system of the vehicle to be turned on according to the duration of the signal lamp.
In some embodiments, the panorama image system controlling apparatus 300 may further include an image acquiring module and a range determining module.
The image obtaining module may be configured to, when it is determined that the vehicle is in a driving state to be steered, obtain an environment image of an environment where the vehicle is located before the time length of the signal lamp corresponding to the signal lamp is obtained by the time length obtaining module 320; the range determination module can be configured to determine whether the vehicle is in the signal intersection range based on the environmental image.
In some embodiments, the duration obtaining module 320 may include a first obtaining unit.
The first acquisition unit can be used for acquiring signal lamp duration corresponding to the signal lamp state when the vehicle is determined to be in the signal lamp intersection range according to the environment image.
In some embodiments, the panoramic image system control apparatus 300 may further include a risk determination module and a risk image control module.
The risk determination module can be used for determining whether the vehicle has collision risk when the vehicle is determined not to be in the range of the signal lamp intersection according to the environment image; the risk image control module can be used for controlling a panoramic image system of the vehicle to be started when the vehicle is determined to have collision risk.
In some embodiments, the signal light state may be a first signal light state, the first signal light state may be used to indicate that the signal light is in the vehicle passage prohibition state, the signal light time length may be a remaining time length of the first signal light, and the time length obtaining module 320 may further include a second obtaining unit.
The second obtaining unit may be configured to obtain a remaining time length of the first signal lamp corresponding to the state of the first signal lamp.
In some embodiments, the duration image control module 330 may include a first control unit.
The first control unit can be used for controlling the panoramic image system of the vehicle to be started when the remaining duration of the first signal lamp is less than or equal to the duration threshold.
In some embodiments, the signal lamp status is a second signal lamp status, the second signal lamp status may be used to indicate that the signal lamp is in a vehicle passage-permitted status, the signal lamp duration may be a remaining duration of the second signal lamp, and the duration acquiring module 320 may further include a third acquiring unit.
The third obtaining unit may be configured to obtain a remaining duration of the second signal lamp corresponding to the state of the second signal lamp.
In some embodiments, the duration image control module 330 may include a fourth obtaining unit and a second control unit.
The fourth obtaining unit may be configured to obtain a driving duration of the vehicle from the current position to the crosswalk; the second control unit may be configured to control the panoramic image system of the vehicle to be turned on when the remaining duration of the second signal lamp is greater than the driving duration.
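For illustration, the driving-duration estimate and the comparison performed by the second control unit might be sketched as below; the distance and speed inputs are assumptions, since the disclosure does not specify how the driving duration is computed.

```python
def control_during_permitted_phase(remaining_green_s: float,
                                   distance_to_crosswalk_m: float,
                                   vehicle_speed_mps: float,
                                   panoramic_image_system) -> None:
    # Second signal lamp state (vehicle passage permitted): estimate how long
    # the vehicle needs to travel from its current position to the crosswalk
    # and turn on the panoramic image system only if the remaining green
    # phase is longer than that estimate. The distance/speed model is an
    # illustrative assumption.
    speed = max(vehicle_speed_mps, 0.1)      # guard against division by zero
    driving_duration_s = distance_to_crosswalk_m / speed
    if remaining_green_s > driving_duration_s:
        panoramic_image_system.turn_on()
```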
In some embodiments, the state determination module 310 may include a first determination unit, a second determination unit, and a third determination unit.
The first determination unit may be configured to determine whether a steering signal of the vehicle is detected, and the steering signal may be generated based on a toggle operation of a user on a steering lever; the second determination unit may be configured to determine that the vehicle is in a to-be-steered running state when it is determined that the steering signal of the vehicle is detected; the third determination unit may be configured to determine that the vehicle is not in the driving state to be steered, when it is determined that the steering signal of the vehicle is not detected.
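In sketch form, the decision made by the first to third determination units reduces to mapping the presence or absence of the steering signal to the driving state; the parameter name below is an assumption.

```python
def determine_to_be_steered_from_signal(steering_signal_detected: bool) -> bool:
    # The steering signal is generated when the user toggles the steering lever.
    # Its presence maps to the to-be-steered driving state (second determination
    # unit); its absence maps to the not-to-be-steered state (third determination unit).
    return steering_signal_detected
```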
In some embodiments, the state determination module 310 may further include a fifth acquisition unit, a fourth determination unit, a fifth determination unit, and a sixth determination unit.
The fifth acquisition unit may be configured to acquire positioning information of the vehicle; the fourth determination unit may be configured to determine whether the vehicle is in a turn lane based on the navigation map and the positioning information; the fifth determining unit may be configured to determine that the vehicle is in a to-be-steered driving state when it is determined that the vehicle is in the steering lane according to the navigation map and the positioning information; the sixth determination unit may be configured to determine that the vehicle is not in the to-be-steered driving state when it is determined that the vehicle is in the straight lane according to the navigation map and the positioning information.
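A minimal sketch of the map-based determination, assuming a hypothetical nav_map.lane_type_at query, might be:

```python
def determine_to_be_steered_from_map(nav_map, positioning) -> bool:
    # Fourth to sixth determination units: look up the lane type at the
    # vehicle's position on the navigation map. A turn lane implies the
    # to-be-steered driving state; a straight lane implies it does not.
    # nav_map.lane_type_at(...) is an illustrative assumption, not a real API.
    lane_type = nav_map.lane_type_at(positioning.latitude, positioning.longitude)
    return lane_type == "turn"
```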
In the scheme provided by this embodiment, it is determined whether the vehicle is in a driving state to be steered; when the vehicle is determined to be in the driving state to be steered, the signal lamp duration corresponding to the signal lamp state is acquired, and the panoramic image system of the vehicle is controlled to be turned on according to the signal lamp duration. Controlling the panoramic image system according to the signal lamp duration corresponding to the signal lamp state helps the user obtain the panoramic image around the vehicle from the panoramic image system and drive the vehicle safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the apparatus embodiments are substantially similar to the method embodiments, they are described more briefly; for relevant details, reference may be made to the corresponding description of the method embodiments. Any processing manner described in a method embodiment may be implemented by the corresponding processing module in an apparatus embodiment, and such details are not repeated in the apparatus embodiments.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 5, a functional block diagram of a vehicle 400 provided by an embodiment of the present application is shown, where the vehicle 400 may include one or more of the following components: memory 410, processor 420, and one or more applications, wherein the one or more applications may be stored in memory 410 and configured to be executed by the one or more processors 420, the one or more applications configured to perform a method as described in the aforementioned method embodiments.
The memory 410 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 410 may be used to store instructions, programs, code sets, or instruction sets. The memory 410 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as determining whether the vehicle is in a driving state to be steered, determining that the vehicle is in a driving state to be steered, obtaining a signal lamp state, obtaining a signal lamp duration, controlling the vehicle, turning on the panoramic image system, obtaining an environment image, determining whether the vehicle is in the signal lamp intersection range, determining that the vehicle is not in the signal lamp intersection range, determining whether the vehicle has a collision risk, obtaining a first signal lamp remaining duration, obtaining a second signal lamp remaining duration, obtaining a driving duration, determining whether a steering signal is detected, toggling a steering lever, generating a steering signal, determining that a steering signal is detected, determining that a steering signal is not detected, determining that the vehicle is not in a driving state to be steered, obtaining positioning information, determining whether the vehicle is in a turn lane, determining that the vehicle is in a turn lane, determining that the vehicle is in a straight lane, and the like), instructions for implementing the method embodiments described above, and the like. The stored data area may also store data created by the vehicle 400 in use (such as the vehicle, the driving state to be steered, the signal lamp, the signal lamp state, the signal lamp duration, the panoramic image system, the environment image, the signal lamp intersection range, the collision risk, the first signal lamp state, the first signal lamp remaining duration, the vehicle passage prohibition state, the duration threshold, the second signal lamp state, the vehicle passage permission state, the second signal lamp remaining duration, the current position, the crosswalk, the driving duration, the steering signal, the steering lever, the positioning information, the navigation map, the turn lane, and the straight lane), and the like.
The processor 420 may include one or more processing cores. The processor 420 connects various parts of the vehicle 400 using various interfaces and lines, and performs the various functions of the vehicle 400 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 410 and invoking the data stored in the memory 410. Optionally, the processor 420 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 420 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 420 and may instead be implemented by a separate communication chip.
Referring to fig. 6, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 500 has stored therein a program code 510, and the program code 510 can be called by a processor to execute the method described in the above method embodiments.
The computer-readable storage medium 500 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 500 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 500 has storage space for program code 510 that performs any of the method steps described above. The program code may be read from or written to one or more computer program products. The program code 510 may, for example, be compressed in a suitable form.
Referring to fig. 7, a block diagram of a computer program product 600 according to an embodiment of the present application is shown. The computer program product 600 includes computer programs/instructions 610, the computer programs/instructions 610 being stored in a computer readable storage medium of a computer device. When the computer program product 600 is run on a computer device, a processor of the computer device reads the computer program/instructions 610 from the computer-readable storage medium, and the processor executes the computer program/instructions 610, so that the computer device performs the method described in the above-described method embodiments.
In the scheme provided by this embodiment, it is determined whether the vehicle is in a driving state to be steered; when the vehicle is determined to be in the driving state to be steered, the signal lamp duration corresponding to the signal lamp state is acquired, and the panoramic image system of the vehicle is controlled to be turned on according to the signal lamp duration. Controlling the panoramic image system according to the signal lamp duration corresponding to the signal lamp state helps the user obtain the panoramic image around the vehicle from the panoramic image system and drive the vehicle safely through the signal lamp intersection, thereby reducing the collision risk of the vehicle and the safety risk during vehicle travel.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A panoramic image system control method is characterized by comprising the following steps:
determining whether the vehicle is in a driving state to be steered;
when the vehicle is determined to be in a driving state to be steered, acquiring signal lamp duration corresponding to a signal lamp state;
and controlling a panoramic image system of the vehicle to be started according to the signal lamp duration.
2. The method for controlling a panoramic image system according to claim 1, wherein before the obtaining of the signal lamp duration corresponding to the signal lamp status, the method further comprises:
acquiring an environment image of an environment where the vehicle is located;
determining whether the vehicle is in the range of a signal lamp intersection or not according to the environment image;
the acquiring of the signal lamp duration corresponding to the signal lamp state comprises the following steps:
and when the vehicle is determined to be in the range of the signal lamp intersection according to the environment image, acquiring the signal lamp duration corresponding to the signal lamp state.
3. The panoramic image system control method of claim 2, further comprising:
when the vehicle is determined not to be in the signal lamp intersection range according to the environment image, determining whether the vehicle has collision risks;
and when the vehicle is determined to have the collision risk, controlling a panoramic image system of the vehicle to be started.
4. The panoramic image system control method of claim 1, wherein the signal lamp state is a first signal lamp state, the first signal lamp state is used for representing that a signal lamp is in a vehicle passing prohibition state, the signal lamp duration is a first signal lamp remaining duration, and the acquiring of the signal lamp duration corresponding to the signal lamp state comprises:
acquiring the remaining time length of the first signal lamp corresponding to the state of the first signal lamp;
the control according to signal lamp duration, the panorama image system of vehicle is opened, include:
and when the remaining duration of the first signal lamp is less than or equal to the duration threshold, controlling a panoramic image system of the vehicle to be started.
5. The panoramic image system control method according to claim 1, wherein the signal lamp state is a second signal lamp state, the second signal lamp state is used for representing that a signal lamp is in a state allowing vehicles to pass through, the signal lamp duration is a remaining duration of the second signal lamp, and the acquiring of the signal lamp duration corresponding to the signal lamp state includes:
acquiring the remaining time length of the second signal lamp corresponding to the state of the second signal lamp;
the control according to signal lamp duration, the panorama image system of vehicle is opened, include:
acquiring the running time of the vehicle from the current position to the pedestrian crossing;
and when the remaining duration of the second signal lamp is greater than the driving duration, controlling a panoramic image system of the vehicle to be started.
6. The panoramic image system control method according to any one of claims 1 to 5, wherein the determining whether the vehicle is in a to-be-steered driving state includes:
determining whether a steering signal of a vehicle is detected, wherein the steering signal is generated based on the toggle operation of a user on a steering poke rod;
when the steering signal of the vehicle is detected, determining that the vehicle is in a driving state to be steered;
when it is determined that the steering signal of the vehicle is not detected, it is determined that the vehicle is not in the to-be-steered running state.
7. The panoramic image system control method according to any one of claims 1 to 5, wherein the determining whether the vehicle is in a to-be-steered driving state includes:
acquiring positioning information of a vehicle;
determining whether the vehicle is in a steering lane according to a navigation map and the positioning information;
when the vehicle is determined to be in a steering lane according to the navigation map and the positioning information, determining that the vehicle is in a driving state to be steered;
and when the vehicle is determined to be in a straight lane according to the navigation map and the positioning information, determining that the vehicle is not in a driving state to be steered.
8. A panoramic image system control device is characterized by comprising:
the state determining module is used for determining whether the vehicle is in a driving state to be steered;
the time length acquisition module is used for acquiring the signal lamp time length corresponding to the signal lamp state when the vehicle is determined to be in the driving state to be steered;
and the duration image control module is used for controlling the opening of the panoramic image system of the vehicle according to the duration of the signal lamp.
9. A vehicle, characterized by comprising:
a memory;
one or more processors coupled with the memory;
one or more applications stored in the memory and configured to be executed by one or more processors, the one or more applications configured to perform the panoramic imagery system control method of any one of claims 1 to 7.
10. A computer-readable storage medium having program code stored therein, the program code being invoked by a processor to perform the panoramic image system control method according to any one of claims 1 to 7.
CN202211617064.7A 2022-12-15 2022-12-15 Panoramic image system control method, control device, vehicle and storage medium Pending CN115817345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211617064.7A CN115817345A (en) 2022-12-15 2022-12-15 Panoramic image system control method, control device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211617064.7A CN115817345A (en) 2022-12-15 2022-12-15 Panoramic image system control method, control device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN115817345A true CN115817345A (en) 2023-03-21

Family

ID=85547512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211617064.7A Pending CN115817345A (en) 2022-12-15 2022-12-15 Panoramic image system control method, control device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115817345A (en)

Similar Documents

Publication Publication Date Title
CN111880533B (en) Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN108802761B (en) Method and system for laser radar point cloud anomaly
US10176720B2 (en) Auto driving control system
CN108025767B (en) System and method for providing driving assistance for safe overtaking
CN109515434B (en) Vehicle control device, vehicle control method, and storage medium
JP7139717B2 (en) VEHICLE COMMUNICATION DEVICE, VEHICLE COMMUNICATION METHOD, AND CONTROL PROGRAM
CN104115198B (en) Vehicle collaborates accessory system and method
JP6935800B2 (en) Vehicle control devices, vehicle control methods, and moving objects
CN111583711B (en) Behavior control method and behavior control device
CN111619566B (en) Vehicle control device, vehicle control method, vehicle, and storage medium
CN112644511A (en) Intelligent upgrade strategy for autonomous vehicles
CN113228135B (en) Blind area image acquisition method and related terminal device
US20230063930A1 (en) Vehicle recording device and information recording method
US20220073104A1 (en) Traffic accident management device and traffic accident management method
CN113022441A (en) Vehicle blind area detection method and device, electronic equipment and storage medium
CN115817345A (en) Panoramic image system control method, control device, vehicle and storage medium
CN115131749A (en) Image processing apparatus, image processing method, and computer-readable storage medium
CN113393702B (en) Driving assistance system, corresponding vehicle, method, computer device and medium
CN113386756A (en) Vehicle follow-up running system, vehicle control device, vehicle, and vehicle control method
CN115140039A (en) Driving support device
CN112654547A (en) Driving reminding method, device and system
CN112758099A (en) Driving assistance method and device, computer equipment and readable storage medium
CN114511834A (en) Method and device for determining prompt information, electronic equipment and storage medium
CN115662190B (en) Prompt message processing method and device for vehicle based on road abnormal state recognition
CN213502263U (en) Automatic control device for vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination