CN107305561B - Image processing method, device and equipment and user interface system


Info

Publication number
CN107305561B
CN107305561B (application CN201610251255.4A)
Authority
CN
China
Prior art keywords
image
geographic
shooting
geographic position
geographical
Prior art date
Legal status
Active
Application number
CN201610251255.4A
Other languages
Chinese (zh)
Other versions
CN107305561A (en)
Inventor
胡蓉
史徐华
Current Assignee
Zebra Network Technology Co Ltd
Original Assignee
Zebra Network Technology Co Ltd
Application filed by Zebra Network Technology Co Ltd
Priority to CN201610251255.4A (granted as CN107305561B)
Priority to PCT/CN2017/080545 (published as WO2017181910A1)
Priority to TW106112956A (published as TW201741630A)
Publication of CN107305561A
Application granted
Publication of CN107305561B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06F 16/20 - Information retrieval of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation specially adapted for navigation in a road network
    • G01C 21/34 - Route searching; Route guidance

Abstract

The application provides an image processing method, apparatus, device, and user interface system. The method comprises the following steps: acquiring a first image corresponding to a current scene; acquiring a first geographic position corresponding to the first image, and associating the first image with the first geographic position; and adding a first marker at the first geographic position of a map to indicate that the first geographic position is associated with the first image. With the method and the device, a user can confirm whether a geographic position has been photographed with very little operational effort.

Description

Image processing method, device and equipment and user interface system
Technical Field
The present application relates to internet technologies, and in particular, to a method, an apparatus, a device, and a user interface system for processing an image.
Background
The automobile data recorder (driving recorder) is an instrument that records images, sounds, and other related information while a vehicle is driving. It can not only provide evidence for traffic accidents but also record landscapes and the like during a user's travels. Therefore, more and more users operate an automobile data recorder while driving.
In the prior art, the automobile data recorder generally shoots in a cyclic mode and/or an interval mode, so the user does not know which geographic positions have been photographed and stored by the recorder. When a user becomes interested in a geographic position along the way and wants to know whether the recorder has captured it, the user must play back the stored images and determine the answer by browsing them. Specifically, the user needs to start the recorder, start its playback function, and fast-forward or rewind through the playback to determine whether the recorder photographed the geographic position.
Thus, to learn whether the automobile data recorder has photographed a geographic position, the user must perform cumbersome operations on the recorder, which results in high operational complexity.
Disclosure of Invention
The application provides an image processing method, an image processing apparatus, image processing equipment, and a user interface system, aiming to solve the problem of high operational complexity when a user tries to confirm whether a geographic position has been photographed.
In a first aspect, the present application provides a method for processing an image, including:
acquiring a first image corresponding to a current scene;
acquiring a first geographical position corresponding to the first image, and associating the first image with the first geographical position;
adding a first marker on the first geographic position of the map for indicating that the first geographic position is associated with the first image.
As an implementation manner, after the acquiring the first image corresponding to the current scene, the method further includes:
and displaying at least one first image at a preset position of a map interface displayed on the display screen.
After the adding of the first marker at the first geographic location of the map, further comprising:
displaying the first marker at the first geographic location of a map interface displayed by a display screen.
As an achievable way, after displaying the first marker at the first geographic location of the map interface displayed on the display screen, the method further comprises:
receiving a first instruction triggered by the operation of the first mark by a user;
and displaying a content display interface on the map interface displayed by the display screen according to the first instruction, wherein at least one first image associated with the first geographic position is displayed in the content display interface.
According to the method and the device, the first image is displayed at the preset position, or the first image is displayed on the content display interface after the first instruction is received, so that a user can quickly view at least one first image associated with the first geographical position without performing complicated operation. Further, since the user can view the first image in time, the user can also perform a deletion operation for the first image that is not needed by the user, thereby saving the storage space.
As an achievable way, after displaying a content display interface on the map interface displayed by the display screen, the method further includes:
receiving a second instruction triggered by the user operating the content display interface;
and sending the first geographic position and at least one first image associated with the first geographic position to network equipment according to the second instruction, so that the network equipment can share the at least one first image.
According to the method and the device, the second instruction is triggered by the user operating the content display interface; that is, the user only needs simple interaction with the display screen to share the first image with other users, which reduces the complexity of the user operation.
As an implementation manner, after the sending the first geographic location and the at least one first image associated with the first geographic location to the network device, the method further includes:
replacing the first mark with a second mark, the second mark being used to characterize that the first image has been shared.
As an implementable manner, the method further comprises:
receiving a third instruction triggered by the operation of a preset user interface by a user;
and sending the acquired first images meeting the preset sharing condition and the first geographical positions associated with the first images to network equipment according to the third instruction, so that the network equipment can share the first images.
As an achievable way, the first image satisfying the preset sharing condition includes:
at least one first image associated with a first mark displayed on a map interface displayed on a display screen; or
At least one first image whose shooting position is within a first geographic range.
As an implementable manner, the method further comprises:
receiving a second image sent by a server and a second geographic position associated with the second image;
and adding a third mark on the second geographic position of the map, wherein the third mark is used for representing that the second geographic position is associated with the second image.
As an implementation manner, before receiving the second image sent by the server and the second geographic location associated with the second image, the method further includes:
reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic location;
correspondingly, the second image is an image shot by other terminal equipment at the second geographic position;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image is an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image is an image shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
As an achievable way, after adding a third mark on the second geographic position of the map, the method further comprises:
receiving a fourth instruction triggered by the user operation of the displayed third mark;
and displaying a content display interface on the display screen according to the fourth instruction, wherein at least one second image associated with the second geographic position is displayed in the content display interface.
The method receives from the server a second image and a second geographic position associated with it, and adds a third mark at the second geographic position of the map to indicate that the second geographic position is associated with the second image. The user can thus learn which interesting places lie at or around the second geographic position, which provides a reference for the user's trips and tourism, so that the user does not miss scenery or points of interest along the way.
As an achievable way, before the acquiring the first image corresponding to the current scene, the method further includes:
determining that a shooting function of the camera equipment needs to be started;
sending a fifth instruction to the image pickup device to instruct the image pickup device to perform shooting;
the acquiring of the first image corresponding to the current scene includes:
and acquiring a first image obtained by shooting the current scene by the camera equipment.
As an implementable manner, the determining that the shooting function of the image capturing apparatus needs to be started includes:
receiving sensing data sent by sensing equipment, and determining that a shooting function of the camera equipment needs to be started according to the sensing data; or
Receiving shooting information sent by a server, wherein the shooting information comprises a geographical position to be shot, and when the geographical position of a vehicle is the geographical position to be shot, determining that the shooting function of the camera equipment needs to be started; or
Receiving a voice signal input by a user, and determining that a shooting function of the camera equipment needs to be started according to the voice signal; or
Receiving a third image, carrying out image analysis on the third image, determining that the current scene is a preset scene according to an image analysis result, and determining that the shooting function of the camera equipment needs to be started; or
And receiving a control signal triggered by a user through hardware equipment, and determining that the shooting function of the camera equipment needs to be started according to the control signal.
By determining that the shooting function of the camera device needs to be started when the sensing data is abnormal, the application ensures that when the vehicle suffers a collision or violent vibration, the camera device can record the current scene in time, providing evidence for events such as traffic accidents encountered by the vehicle.
In this embodiment, the shooting function of the camera device is started by voice, so the user does not need to operate by hand to acquire the desired images. This frees the user's hands, lets the user concentrate on driving, and improves driving safety.
According to the method and the device, the shooting position is obtained from the server or derived through image analysis, so the scenery along the way can be actively recorded for the user without disturbing the driver's attention; the user can capture valuable moments and scenery along the way without performing any shooting operation.
In a second aspect, the present application provides a method for processing an image, including:
receiving a first geographical position sent by terminal equipment;
determining a first image, wherein the first image is an image shot by other terminal equipment in a first geographical range corresponding to the first geographical position;
and sending the first image and a second geographic position associated with the first image to the terminal equipment, wherein the second geographic position is a shooting position of the first image.
According to the method and the device, a first image shot by other terminal devices within the first geographic range corresponding to the first geographic position is determined, and the first image, together with the second geographic position at which it was shot, is sent to the terminal device, so that a user of the terminal device can conveniently and quickly learn about first images shared by users around the current geographic position and the second geographic positions associated with them. Meanwhile, the first image and the second geographic position provide a timely reference for the user's trips and make traveling more interesting.
As an implementable manner, before the determining the first image, further comprising:
receiving the first image and a second geographic position associated with the first image sent by other terminal equipment;
associating the first image with the second geographic location.
As an implementable manner, the method further comprises:
determining a third geographical position meeting a preset shooting condition in a second geographical range corresponding to the first geographical position;
and sending shooting information to the terminal equipment, wherein the shooting information comprises the third geographical position, and the shooting information is used for indicating that the shooting function of the camera equipment needs to be started when the vehicle is located at the third geographical position.
In this embodiment, a third geographic position meeting the preset shooting condition is determined within the second geographic range corresponding to the first geographic position, and shooting information containing this third geographic position, which has shooting value, is sent to the terminal device, so that the terminal device can shoot automatically and the user does not miss memorable moments.
As an achievable way, before determining a third geographic location meeting a preset shooting condition in a second geographic range corresponding to the first geographic location, the method further includes:
receiving a second image sent by other terminal equipment and a fourth geographic position associated with the second image, wherein the second image is an image shot by the other terminal equipment in a second geographic range corresponding to the first geographic position, and the fourth geographic position is a shooting position of the second image;
the determining of a third geographic position meeting a preset shooting condition in the second geographic range corresponding to the first geographic position includes:
determining the captured information of each fourth geographic position according to the second images sent by the other terminal devices and the fourth geographic positions associated with the second images;
and determining, among the fourth geographic positions, a third geographic position meeting the preset shooting condition according to the captured information of each fourth geographic position.
As an implementable manner, the captured information includes: the number of times each fourth geographic position was photographed within a preset time period;
the preset shooting condition is that this photographed count is greater than a preset value, or that it ranks before a preset ranking.
In this embodiment, the second images shot by other terminal devices and their associated fourth geographic positions are analyzed within the second geographic range corresponding to the first geographic position, and the third geographic positions at which many images were shot are extracted. Valuable shooting positions are thereby obtained, and shooting information containing these third geographic positions, which have shooting value, is sent to the terminal device, so that the terminal device can shoot automatically without missing memorable moments.
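The frequency-based selection can be sketched as follows in Python; the patent does not specify aggregation details, so the grid-snapped positions, the time-window handling, and all names and parameters are assumptions:

```python
from collections import Counter

def third_positions(shot_records, window_start, window_end,
                    preset_value=None, preset_rank=None):
    """shot_records: (fourth_geo_position, timestamp) pairs taken from
    second images; positions are assumed pre-snapped to a grid so that
    identical shooting spots compare equal."""
    counts = Counter(pos for pos, t in shot_records
                     if window_start <= t <= window_end)
    if preset_value is not None:
        # Condition 1: photographed count greater than a preset value.
        return [p for p, n in counts.items() if n > preset_value]
    # Condition 2: positions ranking before a preset ranking (top-N).
    return [p for p, _ in counts.most_common(preset_rank)]

records = [((31.23, 121.47), 10), ((31.23, 121.47), 12), ((31.25, 121.48), 11)]
print(third_positions(records, 0, 100, preset_value=1))  # [(31.23, 121.47)]
```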
In a third aspect, the present application provides an image processing apparatus, where the functions of the apparatus may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions. Specifically, the apparatus includes:
the input module is used for acquiring a first image corresponding to a current scene;
the association module is used for acquiring a first geographic position corresponding to the first image and associating the first image with the first geographic position;
the marking module is used for adding a first mark on the first geographic position of the map and is used for representing that the first geographic position is associated with the first image.
In a fourth aspect, the present application provides an image processing apparatus, where the functions of the apparatus can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions. Specifically, the apparatus includes:
the input module is used for receiving a first geographical position sent by the terminal equipment;
the processing module is used for determining a first image, wherein the first image is an image shot by other terminal equipment in a first geographical range corresponding to the first geographical position;
and the output module is used for sending the first image and a second geographic position associated with the first image to the terminal equipment, wherein the second geographic position is the shooting position of the first image.
In a fifth aspect, the present application provides an image processing apparatus, comprising: an input device and a processor;
the input device is used for acquiring a first image corresponding to a current scene;
the processor is coupled to the input device and used for acquiring a first geographic position corresponding to the first image and associating the first image with the first geographic position;
the processor is further configured to add a first marker to the first geographic location of the map, for indicating that the first geographic location is associated with the first image.
In a sixth aspect, the present application provides an image processing apparatus, comprising: an input device, a processor, and an output device;
the input device is used for receiving a first geographical position sent by the terminal device;
the processor is coupled to the input device and the output device and used for determining a first image, wherein the first image is an image shot by other terminal devices in a first geographic range corresponding to the first geographic position;
the output device is configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is a shooting location of the first image.
In a seventh aspect, the present application provides an apparatus for image processing of a vehicle, comprising: an onboard input device and an onboard processor;
the airborne input equipment is used for acquiring a first image corresponding to the current scene;
the onboard processor is coupled to the onboard input device and used for acquiring a first geographic position corresponding to the first image and associating the first image with the first geographic position;
the onboard processor is further configured to add a first marker to the first geographic location of the map, for indicating that the first geographic location is associated with the first image.
In an eighth aspect, the present application provides a user interface system, comprising:
the display component is used for displaying a map interface;
the processor is used for triggering the display component to display a first mark on a first geographic position of the map interface, and is used for representing that the first geographic position is associated with the first image.
In a ninth aspect, the present application provides an in-vehicle internet operating system, comprising:
the image control unit is used for controlling the vehicle-mounted input equipment to acquire a first image corresponding to the current scene;
and the association control unit is used for acquiring a first geographical position corresponding to the first image, acquiring a map added with a first mark at the first geographical position and used for representing that the first geographical position is associated with the first image, wherein the map added with the first mark is obtained by adding the first mark at the first geographical position of the original map.
According to the image processing method, the image processing device, the image processing equipment and the user interface system, after the first image corresponding to the current scene is obtained, the first geographical position corresponding to the first image is obtained, namely the specific shooting position of the first image is determined. The first image is then associated with the first geographic location so that the user can quickly view the first image associated with the first geographic location when subsequently viewing the first geographic location. After the association relationship between the first image and the first geographic position is established, the first mark is added to the first geographic position of the map, and the first image corresponding to the current scene is associated with the first geographic position, so that a user can directly know that the camera shooting device shoots the first geographic position by watching the first mark, the complicated operation of the user is avoided, and the complexity of the user operation is reduced. Moreover, since the specific shooting position is provided, the user is not required to identify the specific shooting position according to buildings or roads in the image, and the efficiency of determining the first geographical position by the user is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an alternative networking approach of the present application;
FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a user interface state provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a user interface state provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a user interface state provided in an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a user interface state provided by an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application;
FIG. 11 is a schematic signaling flow diagram of a method for processing an image according to an embodiment of the present application;
FIG. 12 is a signaling flowchart of a method for processing an image according to an embodiment of the present application;
FIG. 13 is a signaling flowchart of a method for processing an image according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 18 is a schematic hardware configuration diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 19 is a schematic hardware configuration diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 20 is a schematic hardware configuration diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 21 is a schematic hardware structure diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 22 is a schematic structural diagram of a user interface system provided in an embodiment of the present application;
FIG. 23 is a schematic structural diagram of a vehicle-mounted internet operating system according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The application provides a method for processing images, which can be applied to the field of vehicle driving. Taking a vehicle as an example, the vehicle may be a fuel-only vehicle, a gas-only vehicle, a combined fuel-gas vehicle, or a power-assisted electric vehicle; the type of the vehicle is not limited in the embodiments of the present application, and the vehicle has a corresponding vehicle-mounted system.
Specifically, during the driving of the vehicle, the camera device can be controlled to shoot at the right time, and the images shot by the camera device can be acquired. After an image is acquired, the image and the geographic position where it was shot are stored in association. A mark is then added at that geographic position on the map interface, so that the user can intuitively and quickly know, through the mark and without any operation, that the camera device photographed that geographic position. Optionally, an image associated with the geographic position may also be displayed on the map interface shown on the display screen, so that the user can view the images the camera device captured there. Furthermore, when the user wants to share an image with other users after viewing it, an instruction triggered by the user's operation on the display screen can be obtained, and the image is sent to the network device according to the instruction, so that it can be shared with other users quickly and conveniently.
The execution body of the embodiments may be an image processing apparatus, which may be implemented by hardware, or by hardware executing corresponding software, and may be implemented in the infrastructure of a vehicle or a terminal device. When the apparatus performs the embodiments shown in figs. 2 to 10 below, it may be implemented in the infrastructure of terminal devices, including mobile terminals, vehicle-mounted devices, and the like. The mobile terminal can be a mobile phone or a tablet, and the vehicle-mounted device can be a driving recorder, a car machine (head unit), a center console, a navigation device, and the like. When the apparatus performs the embodiments shown in figs. 11 to 13 below, it may be implemented in the infrastructure of a server.
The networking method and the specific implementation of the present application will be described in detail below with reference to a vehicle as an example, and the image processing apparatus implemented in a car machine or a server as an execution subject.
Fig. 1 is a schematic diagram of an alternative networking approach of the present application. The method for processing the image can be realized through the networking. As shown in fig. 1, the networking includes: the car machine 101, the camera device 102 and the positioning module 103.
The car machine 101 refers to a vehicle-mounted infotainment product installed inside a car. The car machine 101 is mostly installed in a center console of the car, and a host of the car machine 101 may be integrally disposed with the display screen or may be separately disposed from the display screen. The car machine 101 can functionally realize information communication between a person and a car, and between the car and the outside. The display screen of the vehicle machine can display a navigation path, a driving path and the like.
The camera device 102 may be a camera installed at any position of the vehicle, may also be a vehicle data recorder, and may also be a terminal device such as a mobile phone and a tablet having a camera function, that is, the camera device 102 is any device having a camera function. After receiving an instruction for instructing shooting sent by the vehicle machine 101, the camera device 102 shoots a current scene, and sends a first image obtained by shooting to the vehicle machine 101.
The Positioning module 103 may be a Global Positioning System (GPS), a BeiDou Navigation Satellite System (BDS), or the like. The location module 103 may be self-contained in the vehicle or may be provided by other external devices. The positioning module 103 is used for providing position information to the car machine 101.
After acquiring the first image shot by the camera device 102, the car machine 101 acquires a first geographic position corresponding to the current scene from the positioning module 103. The car machine 101 processes the first image in conjunction with the first geographic location. For example, a first image is associated with a first geographic location, and then a first marker is added to the first geographic location of the map to indicate that the first geographic location is associated with the first image. Other processing procedures performed by the car machine 101 on the first image will be described in detail in the following embodiments.
The networking shown in fig. 1 may further include a sensing device 104. The sensing device 104 may send sensing data to the car machine 101 in real time. The in-vehicle machine 101 may determine that a shooting function of the image pickup apparatus needs to be started according to the sensing data. Those skilled in the art will understand that the way of the car machine starting the shooting function is only one possible implementation way of the car machine 101 when it is determined that the shooting function of the image capturing device needs to be started, and as for other possible implementation ways, the following embodiments will be described in detail.
On the basis of the above embodiment, the networking shown in fig. 1 may further include a server 105. The server 105 may receive the first image sent by the car machine 101, and share the first image with other users. Similarly, the server 105 may also send a second image shared by another user to the car machine 101.
Those skilled in the art will appreciate that this networking is merely exemplary. The physical devices in the network may also be replaced by other devices. For example, the sensing device 104 may be other detection devices as long as the detection devices can send data to the vehicle machine 101, which may indicate that the vehicle is collided or the vehicle is violently shocked. For possible implementation manners of the networking, details of this embodiment are not described herein again. The following describes in detail the image processing method provided in the present application, taking the networking shown in fig. 1 as an example.
Fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the present application. As shown in fig. 2, the process includes:
step 201, acquiring a first image corresponding to a current scene.
The vehicle-mounted device obtains a first image obtained by shooting a current scene by the camera device. An implementation of the image pickup apparatus may refer to the embodiment shown in fig. 1. The first image comprises a photograph and/or a video. Alternatively, the user may preset specific content included in the first image, for example, the user preset the camera device to capture a picture and a video at the same time.
In a possible embodiment, the car machine may acquire the first image according to a preset period. Specifically, the car machine may obtain a preset period input by a user, then the car machine sends the preset period to the camera device, and the camera device sends a first image shot of a current scene in one period to the car machine according to the preset period. For example, if the preset period is 5 minutes, the image capturing apparatus may send the first image captured by the image capturing apparatus in the past 5 minutes to the in-vehicle machine every 5 minutes.
In another possible embodiment, after determining that the shooting function of the image pickup apparatus needs to be started, the in-vehicle device sends a fifth instruction to the image pickup apparatus to instruct the image pickup apparatus to shoot. The fifth instruction is used to instruct the image capturing apparatus to capture an image, and is referred to as a capture instruction in the following embodiments. And then the vehicle machine receives a first image obtained by shooting the current scene by the camera equipment. Specifically, the car machine sends a shooting instruction to the camera device, wherein the shooting instruction comprises a shooting mode, such as shooting a picture or shooting a video, or shooting the picture and the video at the same time. When the shooting mode is to shoot a video, the shooting instruction may further include a shooting duration. Alternatively, the shooting time period may also be set in advance. And the camera shooting equipment shoots the current scene according to the shooting instruction and sends a first image obtained by shooting to the vehicle machine. Those skilled in the art can understand that the shooting function in this embodiment may specifically be a snapshot function of the image capturing apparatus, and correspondingly, the fifth instruction is used to instruct the image capturing apparatus to perform snapshot, and after receiving the fifth instruction, the image capturing apparatus performs snapshot on the current scene.
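To make the command flow concrete, here is a minimal Python sketch (not part of the patent; `CaptureCommand`, `CameraDevice`, and all field names are illustrative assumptions) of how a car machine might form the fifth instruction with a shooting mode and optional duration, and receive the first image back:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaptureCommand:
    """The 'fifth instruction': tells the camera device to shoot."""
    mode: str                         # "photo", "video", or "photo+video"
    duration_s: Optional[int] = None  # only meaningful when video is shot

class CameraDevice:
    """Stand-in for the camera device; returns a fake first image."""
    def shoot(self, cmd: CaptureCommand) -> bytes:
        # A real device would return the captured frame(s) here.
        return b"<jpeg bytes of the current scene>"

def request_first_image(camera: CameraDevice) -> bytes:
    # The car machine decides the shooting function must be started,
    # sends the capture command, and receives the first image back.
    cmd = CaptureCommand(mode="photo+video", duration_s=10)
    return camera.shoot(cmd)

print(len(request_first_image(CameraDevice())))
```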
When the car machine does not send a shooting instruction to the image pickup device, the image pickup device can perform normal shooting according to a mode set by the car machine, and store the first image according to the mode set by the car machine.
In the present embodiment, determining that the shooting function of the image pickup apparatus needs to be started includes the following possible implementations.
A feasible implementation manner is to receive sensing data sent by the sensing equipment and determine that the shooting function of the camera equipment needs to be started according to the sensing data.
Specifically, the sensing device may be an acceleration sensor or a gravity sensor or the like. Optionally, the sensing device may be a vehicle-mounted sensing device, and the vehicle-mounted sensing device has higher sensitivity than other vehicle sensing devices, and may improve the authenticity of the detected event.
The vehicle machine can acquire sensing data sent by the acceleration sensor or the gravity sensor in real time, monitor the sensing data, determine that an emergency happens to the vehicle when the vehicle machine finds that the sensing data is abnormal, and at the moment, determine that a shooting function of the camera equipment needs to be started.
When the sensing data is abnormal, the shooting function of the camera shooting equipment is determined to be started, and the camera shooting equipment can timely record the current scene when the vehicle is collided or violently vibrated, so that evidence is provided for events such as traffic accidents encountered by the vehicle.
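As an illustration of the sensing-data check, a minimal sketch follows; the patent does not fix an algorithm or threshold, so the 3 g deviation rule below is purely an assumption:

```python
import math

GRAVITY = 9.81           # m/s^2
ANOMALY_THRESHOLD = 3.0  # assumed: flag shocks beyond ~3 g deviation from rest

def is_sensing_data_abnormal(ax: float, ay: float, az: float) -> bool:
    """Return True when the acceleration magnitude deviates strongly from rest."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - GRAVITY) > ANOMALY_THRESHOLD * GRAVITY

# A collision-like spike triggers the shooting function.
if is_sensing_data_abnormal(25.0, 3.0, 40.0):
    print("start shooting function of the camera device")
```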
In another possible implementation manner, a voice signal input by a user is received, and a shooting function of the image pickup apparatus is determined to be started according to the voice signal.
Specifically, starting the shooting function of the image pickup apparatus may also be triggered by the user. For example, the voice wake-up word is "zebra", and when the car machine receives a voice signal of "zebra snap" input by a user, it determines that a shooting function of the camera device needs to be started according to the voice signal.
In this embodiment, the shooting function of the camera device is started by voice, so the user does not need to operate by hand to acquire the desired images. This frees the user's hands, lets the user concentrate on driving, and improves driving safety.
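The voice path can be sketched as a simple keyword check, assuming some ASR front end has already transcribed the audio to text (the wake phrase follows the "zebra snap" example above; everything else is illustrative):

```python
WAKE_PHRASE = "zebra snap"  # wake word plus command, per the example above

def should_start_shooting(transcribed_speech: str) -> bool:
    """Decide from the recognized text whether to start the shooting function."""
    return WAKE_PHRASE in transcribed_speech.lower()

print(should_start_shooting("Zebra snap, please"))  # True
```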
In another feasible implementation manner, the vehicle-mounted device receives shooting information sent by the server, the shooting information comprises a geographical position to be shot, and when the geographical position where the vehicle is located is the geographical position to be shot, the shooting function of the camera device is determined to be started.
Specifically, in the process that the car machine interacts with the server, the car machine can send the geographic position of the vehicle to the server in real time, the server determines whether the geographic position to be shot exists near the geographic position of the vehicle, and if the geographic position to be shot exists near the geographic position of the vehicle, the geographic position to be shot is sent to the car machine. For example, the server may obtain a geographical location with a beautiful landscape through the internet, and when the geographical location with the beautiful landscape exists near the geographical location where the vehicle is located, the server determines that a geographical location to be photographed (the geographical location with the beautiful landscape) exists near the geographical location where the vehicle is located, and the server sends the photographing information to the vehicle machine, wherein the photographing information includes the geographical location to be photographed. The vehicle-mounted device acquires the geographical position of the vehicle in real time, and when the geographical position of the vehicle is the geographical position to be shot, the shooting function of the camera equipment is determined to be started.
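Deciding that "the geographical position of the vehicle is the geographical position to be shot" presumably tolerates some radius around the target. A sketch using the haversine distance follows; the 100 m radius is an assumption, not taken from the patent:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def at_position_to_be_shot(vehicle, to_shoot, radius_m=100.0):
    # Start shooting once the vehicle is within the assumed radius.
    return haversine_m(*vehicle, *to_shoot) <= radius_m

# Vehicle approaching a scenic position reported by the server.
print(at_position_to_be_shot((31.2304, 121.4737), (31.2309, 121.4741)))
```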
In another feasible implementation manner, the third image sent by the camera device is received, the image analysis is performed on the third image, and if the current scene is determined to be the preset scene according to the image analysis result, it is determined that the shooting function of the camera device needs to be started.
Specifically, the manner in which the camera device sends the third image to the car machine is not limited. After the car machine acquires the third image, it performs color analysis on the third image to obtain its color information, which includes the types of colors and the area ratio of each color, determines from this color information whether the current scene is a preset scene, and if so, determines that the shooting function of the camera device needs to be started. For example, suppose the colors in the third image are red, green, yellow, brown, and gray, occupying 25%, 30%, 25%, 10%, and 10% of the area respectively. The car machine determines from this color information that the current scene is a landscape scene; since this is a preset scene, it determines that the shooting function of the camera device needs to be started. The preset scene in this embodiment is not limited to a landscape scene; it may also be, for example, a building scene that matches certain architectural characteristics, and the specific implementation of the preset scene is not particularly limited here.
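A toy version of the color analysis, using the example ratios above, is sketched below; the rule that green plus yellow must cover at least half of the frame is an illustrative assumption, since the patent only states that color types and area ratios are compared against a preset scene:

```python
from collections import Counter

def color_ratios(pixels):
    """pixels: iterable of coarse color names, quantized upstream."""
    counts = Counter(pixels)
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}

def is_preset_landscape_scene(pixels) -> bool:
    ratios = color_ratios(pixels)
    # Assumed rule: natural colors (green + yellow) dominate the frame.
    return ratios.get("green", 0) + ratios.get("yellow", 0) >= 0.5

# Mirrors the example: 25% red, 30% green, 25% yellow, 10% brown, 10% gray.
third_image = ["red"] * 25 + ["green"] * 30 + ["yellow"] * 25 + ["brown"] * 10 + ["gray"] * 10
print(is_preset_landscape_scene(third_image))  # True -> start shooting
```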
According to the two feasible implementation modes, the shooting position is obtained from the server or is obtained through image analysis, the scenery along the way can be actively recorded for the user, the attention of a vehicle owner is not disturbed, and the valuable moments and scenery along the way can be obtained without the user performing shooting operation on the scenery along the way.
In another possible implementation manner, a control signal triggered by a user through a hardware device is received, and a shooting function of the image pickup device is determined to be started according to the control signal.
Specifically, the hardware device may be a steering wheel of a vehicle, a touch screen of a vehicle, a center console of a vehicle, and the like. The user operates the hardware devices, thereby triggering the control signal. Taking the hardware device as a steering wheel as an example, the car machine and the steering wheel of the vehicle can be connected through a wired connection or a wireless connection. The car machine receives a control signal triggered by the user through the steering wheel when the user presses the preset button, and then determines that the shooting function of the camera equipment needs to be started according to the control signal. In the driving process of the vehicle, a user needs to always control the steering wheel, so that the control signal is triggered through the steering wheel, and the operation of the user is convenient and quick.
Step 202, obtaining a first geographic position corresponding to the first image, and associating the first image with the first geographic position.
After the car machine acquires the first image shot by the current scene, the car machine acquires a first geographic position corresponding to the first image, namely, the shooting position of the first image. For example, the car machine sends a position acquisition request to a positioning module arranged in the vehicle, and the car machine receives a first geographic position corresponding to a first image returned by the positioning module. Or the positioning module provides the first geographic position of the current scene to the vehicle machine in real time, the vehicle machine displays the driving path on the display screen in real time through the first geographic position, and the vehicle machine can acquire the first geographic position corresponding to the first image according to the current driving path. The first geographic position corresponding to the first image can be acquired by the car machine through other methods as will be understood by those skilled in the art. For example, the car machine may interact with other terminal devices to obtain a first geographic location corresponding to the first image. The embodiment does not particularly limit the specific manner of acquiring the first geographic location.
After the vehicle machine acquires the first geographic position, the vehicle machine associates the first image with the first geographic position. In a specific implementation process, the first image and the first geographic location may be stored in a memory in an associated manner, and then a mapping relationship between the first image and the first geographic location is established, or a location attribute may be added to an attribute of the first image when the first image is stored, where the location attribute includes the first geographic location. The foregoing list schematically lists implementations in which the first image is associated with the first geographic location, and the embodiment is not particularly limited in this regard for other implementations. By associating the first image with the first geographic location, the subsequent user can quickly view the first image taken at the first geographic location.
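Both association strategies listed above (an explicit mapping, and a location attribute stored with the image) can be sketched as follows; the data model is assumed, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

GeoPosition = Tuple[float, float]  # (latitude, longitude)

@dataclass
class FirstImage:
    data: bytes
    location: Optional[GeoPosition] = None  # option 2: position as an image attribute

@dataclass
class ImageStore:
    # option 1: an explicit mapping from geographic position to images
    by_position: Dict[GeoPosition, List[FirstImage]] = field(default_factory=dict)

    def associate(self, image: FirstImage, position: GeoPosition) -> None:
        image.location = position
        self.by_position.setdefault(position, []).append(image)

store = ImageStore()
store.associate(FirstImage(b"..."), (31.2304, 121.4737))
# Viewing the first geographic position later retrieves its images directly:
print(len(store.by_position[(31.2304, 121.4737)]))
```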
Step 203, adding a first mark on a first geographic position of the map, wherein the first mark is used for representing that the first geographic position is associated with a first image.
After the first geographic position is obtained, the vehicle machine adds a first mark on the first geographic position of the map. Since the first image has an association relationship with the first geographic location, the operation of adding the first mark to the first geographic location may indicate that the first geographic location is associated with the first image.
Those skilled in the art will appreciate that the first marker may be displayed directly on the map interface on the display screen after the first marker is added to the first geographic location of the map, or may be displayed when the user browses the map interface.
Fig. 3 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in FIG. 3, a first marker (rendered as a pin-style icon; an image placeholder in the original text) is displayed at a first geographic location of the map interface displayed on the display screen, i.e. the first geographic location is marked. The first marker may indicate that the first geographic location has been captured, or that the first geographic location is associated with a first image, or the like. Further, in the embodiments described below, the first mark may also be used to display the first image.
Optionally, the driving path of the vehicle may also be displayed in real time on the map interface shown on the display screen. After the car machine acquires the first geographic position, it displays a first mark at that position on the map interface corresponding to the displayed driving path. Fig. 4 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in fig. 4, the car machine displays the first mark (again an icon placeholder in the original text) at a first geographic position near the arrow on the vehicle's travel path. Those skilled in the art will appreciate that the other positions along the travel path covered by the same mark are geographic positions at which the car machine added first marks at earlier moments. When seeing the first marks, the user can know which geographic positions the camera device has photographed and that the car machine has stored the first images corresponding to those geographic positions; the user can subsequently view these first images at any time.
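The marker bookkeeping behind FIG. 3 and FIG. 4 can be sketched as below; the mark kinds mirror the first, second, and third marks described in this application, while the data model itself is an assumption:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Tuple

class MarkKind(Enum):
    FIRST = auto()   # this position was shot locally
    SECOND = auto()  # the associated image has been shared
    THIRD = auto()   # an image shared by another user exists here

@dataclass
class MapMark:
    position: Tuple[float, float]
    kind: MarkKind

class MapLayer:
    def __init__(self):
        self.marks: List[MapMark] = []

    def add_first_mark(self, position):
        self.marks.append(MapMark(position, MarkKind.FIRST))

    def replace_with_second_mark(self, position):
        # After sharing, the first mark is replaced by a second mark.
        for m in self.marks:
            if m.position == position and m.kind is MarkKind.FIRST:
                m.kind = MarkKind.SECOND

layer = MapLayer()
layer.add_first_mark((31.2304, 121.4737))
layer.replace_with_second_mark((31.2304, 121.4737))
```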
According to the image processing method, after the first image corresponding to the current scene is obtained, the first geographical position corresponding to the first image is obtained, namely the specific shooting position of the first image is determined. The first image is then associated with the first geographic location so that the user can quickly view the first image associated with the first geographic location when subsequently viewing the first geographic location. After the association relationship between the first image and the first geographic position is established, the first mark is added to the first geographic position of the map, and the first image corresponding to the current scene is associated with the first geographic position, so that a user can directly know that the camera shooting device shoots the first geographic position by watching the first mark, the complicated operation of the user is avoided, and the complexity of the user operation is reduced. Moreover, since the specific shooting position is provided, the user is not required to identify the specific shooting position according to buildings or roads in the image, and the efficiency of determining the first geographical position by the user is improved.
On the basis of the above embodiments, the present application further displays the first image, which can be implemented in the following feasible implementation manners.
After a first image obtained by shooting a current scene by a camera device is acquired, at least one first image is displayed at a preset position of a map interface displayed on a display screen. The specific implementation process is shown in fig. 5.
Fig. 5 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in fig. 5, a first mark is displayed on the map interface, and at least one first image is also displayed on the lower right corner of the map interface. Those skilled in the art will appreciate that the present embodiment may display all of the first images on the map interface, or may display a part of the first images on the map interface, and indicate the total number of the first images and the currently displayed number on the map interface. When the currently displayed number is part of the first images, the user can acquire other first images by clicking or sliding the display screen and the like. The preset position in this embodiment may be not only the lower right corner, but also the lower left corner, the upper right corner, and the like. Alternatively, the preset position may also vary with the travel path of the vehicle, i.e. the first image does not obscure the travel path of the vehicle.
In another possible implementation manner, after the first mark is displayed at a first geographical position of a map interface displayed on a display screen, a first instruction triggered by a user operating the first mark is received; and displaying a content display interface on the map interface displayed by the display screen according to the first instruction, wherein at least one first image associated with the first geographic position is displayed in the content display interface. The first instruction is used for indicating that a content display interface is displayed on the map interface. The first instruction may be, for example, a view instruction for viewing the first picture. The specific implementation process is shown in fig. 6. Those skilled in the art will understand that the present embodiment may display all the first images on the content display interface, or may display a part of the first images on the content display interface, and mark the total number of the first images and the currently displayed number on the content display interface. When the currently displayed number is part of the first images, the user can acquire other first images by clicking or sliding the display screen and the like.
Fig. 6 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application. As shown in FIG. 6, when the user wants to view the first image associated with the first geographic location, the user operates the first mark (an icon placeholder in the original text) to trigger the first instruction; the operation can be clicking the first mark, long-pressing it, sliding it, and so on. The car machine receives the first instruction triggered by the user's operation of the first mark. The car machine then displays a content display interface at the middle position of the map interface displayed on the display screen, and at least one first image associated with the first geographic location is displayed in the content display interface. It should be noted that the content display interface may be located in the middle of the map interface or at other positions of the map interface; the specific position of the content display interface is not particularly limited in this embodiment. Optionally, the content display interface may further display the total number of photos and videos included in the shared first image, the first geographic location, the shooting time, and the like.
These two feasible implementations let the user quickly view the first image associated with the first geographic location without cumbersome operations. Further, because the user can view the first image in time, the user can also delete first images that are not needed, saving storage space.
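To make the viewing flow above concrete, the following is a minimal sketch, with all names assumed, of how a car machine might assemble the content display interface model when a first mark is operated; it illustrates the paging behavior (total number versus currently displayed number) and is not the actual implementation of this application.

```python
# Minimal sketch (all names assumed) of handling the first instruction:
# operating a first mark yields the data the content display interface shows.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeoLocation:
    lat: float
    lon: float

@dataclass
class FirstMark:
    location: GeoLocation
    images: List[str] = field(default_factory=list)  # associated first images

def on_first_mark_tapped(mark: FirstMark, page_size: int = 4) -> dict:
    shown = mark.images[:page_size]
    return {
        "location": (mark.location.lat, mark.location.lon),
        "images": shown,             # first images currently displayed
        "shown": len(shown),         # currently displayed number
        "total": len(mark.images),   # total number of first images
    }
```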
On the basis of the above embodiments, the car machine in the present application may also interact with a network device to implement a sharing function. The network device may be a mobile network device, such as an in-vehicle device or a mobile terminal, or may be a server, a computer, or the like. The car machine shares the first image with other network devices and, at the same time, obtains second images that other network devices share with it. The sharing process of the present application is described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
First, how the car machine shares the first image with the network device is described, that is, the process seen from the point of view of the car machine.
One feasible implementation is that a second instruction triggered by the user operating the content display interface is received, and the first geographic position and at least one first image associated with it are sent to the network device according to the second instruction, so that the network device can share the at least one first image. The second instruction instructs the device to send the first geographic location and the at least one associated first image to the network device; it may be, for example, a sharing instruction for sharing the first image. Those skilled in the art will appreciate that when a plurality of first images is associated with the first geographic location, the user may select on the content display interface at least one of them to be sent to the network device.
Specifically, the car machine may receive the second instruction triggered on the content display interface in a variety of ways. For example, referring again to the finally displayed map interface in fig. 6, the user may click or double-click any first image, whereupon the car machine receives the second instruction. Alternatively, to prevent accidental clicks, when the user clicks or double-clicks any first image, a prompt window (not shown) with a "yes"/"no" dialog box may first be displayed on the content display interface, and the car machine receives the second instruction only when the user clicks "yes".
Other implementations are also provided in the present application. For example, fig. 7 is a schematic diagram of a state change of a user interface provided in an embodiment of the present application. As shown in fig. 7, a user interface element, which may be, for example, a window, an icon, a dialog box, a floating box, or a button control, is further displayed on the content display interface, and the user triggers the second instruction by operating this element. For example, a "drop pin" icon is set on the content display interface of fig. 7; after the user clicks the "drop pin" icon, the car machine receives the second instruction triggered on the content display interface.
After receiving the user's second instruction, the car machine sends the first geographic position and the first image(s) associated with it to the network device. Those skilled in the art will appreciate that all of the first images may be sent to the network device after the user clicks "drop pin", or the user may first select at least one first image to share, in which case only the selected first image(s) are sent to the network device after the click.
Because the second instruction is triggered on the content display interface, the user needs only a simple interaction with the display screen to share the first image with other users, which reduces the complexity of user operation.
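As an illustration of this sharing step, the following sketch sends the first geographic location and the selected first images to a network device; the endpoint URL and the JSON payload shape are assumptions for illustration, not the application's actual protocol.

```python
# Minimal sketch of the second-instruction handler: send the first geographic
# location and the selected first images to a network device.
import base64
import json
import urllib.request

def share_first_images(lat, lon, image_paths,
                       endpoint="https://example.com/share"):  # hypothetical URL
    images = []
    for path in image_paths:
        with open(path, "rb") as f:
            images.append(base64.b64encode(f.read()).decode("ascii"))
    payload = {"location": {"lat": lat, "lon": lon}, "images": images}
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network device acknowledges
        return resp.status
```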
Optionally, in this embodiment, when the first image is shared with other users, the first mark is replaced with a second mark, and the second mark is used for characterizing that the first image has been shared.
In another feasible implementation, a third instruction triggered by the user operating a preset user interface element on the display screen is received, and the acquired first images meeting a preset sharing condition, together with the first geographic positions associated with them, are sent to the network device according to the third instruction, so that the network device can share each first image. The third instruction instructs the device to send the qualifying first images and their associated first geographic positions to the network device; it may be, for example, a sharing instruction for sharing the first images meeting the preset condition.
Specifically, the first images satisfying the preset sharing condition may be determined in the following feasible ways.
One feasible implementation is the at least one first image associated with the first marks currently displayed on the map interface shown on the display screen.
Specifically, a plurality of first marks is displayed on the map interface while the user browses geographic locations of interest, so only a part of the map interface is visible on the display screen at any moment. If the user operates the preset user interface element, the third instruction is triggered, and after obtaining it the car machine shares the at least one first image associated with the first marks visible on the display screen. Those skilled in the art will understand that this embodiment may send to the network device all first images associated with those first marks, or only at least one of them.
Another feasible implementation is the at least one first image whose shooting position lies within a first geographic range. The first geographic range may be preset by the user or be a system default. The car machine may send to the network device all first images whose shooting positions lie within the first geographic range, or only at least one of them.
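A minimal sketch of this range condition follows, assuming each image carries its recorded shooting position; the haversine helper and the tuple layout are illustrative choices, not part of this application.

```python
# Minimal sketch: keep first images whose shooting position lies within a
# radius (the first geographic range) around a center point.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def images_in_range(images, center_lat, center_lon, radius_km):
    """images: iterable of (path, lat, lon) tuples; returns those inside the range."""
    return [img for img in images
            if haversine_km(img[1], img[2], center_lat, center_lon) <= radius_km]
```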
Fig. 8 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application. As shown in fig. 8, a button control is also displayed on the display screen. When the user clicks the button control, the third instruction is triggered, and according to the third instruction the car machine sends the acquired first images meeting the preset sharing condition, together with the first geographic positions associated with them, to the network device.
The foregoing only schematically lists possible implementations of the preset sharing condition; other conditions are possible. For example, the preset sharing condition may also be all acquired first images that have not yet been shared, or the first images whose shooting time meets a preset condition.
As will be understood by those skilled in the art, the first images satisfying the preset sharing condition do not include first images that have already been shared.
Optionally, in this embodiment, when the first images meeting the preset sharing condition are shared with other users, the first mark of each such first image is replaced with the second mark, and the second mark is used for characterizing that the first image has been shared.
By sharing the first images meeting the preset sharing condition, this embodiment lets the user quickly and conveniently share first images in bulk with other users.
Next, how the car machine obtains second images shared by other terminal devices is described, again from the perspective of the car machine.
In a specific embodiment, the second image and the second geographic position associated with the second image sent by the server are received, and a third mark is added to the second geographic position of the map for representing that the second image is associated with the second geographic position.
Optionally, before receiving the second image and its associated second geographic location from the server, the car machine also reports geographic location information to the server. The car machine may report the geographic location information in the following ways:
a feasible implementation manner is to report geographical location information to a server, where the geographical location information includes: a second geographic location; correspondingly, the second image is an image of the other terminal device taken at the second geographic location.
Specifically, the second geographic location may be a current geographic location of the vehicle, which is sent by the vehicle machine to the server in real time, or the current geographic location of the vehicle, which is sent by the vehicle machine to the server according to a preset period, or a user-selected geographic location, which is sent by the vehicle machine to the server.
Another possible implementation manner is to report the geographical location information to the server, where the geographical location information includes: the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position; correspondingly, the second image is an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image.
Specifically, the third geographic location may be the current geographic location of the vehicle or a geographic location selected by the user. After the car machine sends the third geographic position to the server, the server determines the second geographic range corresponding to it; the second geographic range may be preset by the user or be a car-machine default. The second geographic range may specifically be the third geographic location and an area in its vicinity. For example, the second geographic range may be the area covered by a circle centered on the third geographic location with a preset distance as radius, or it may be the administrative area where the third geographic location lies; for example, if the third geographic location is on Huaihai East Road in Shanghai, the second geographic range is Huangpu District, Shanghai. This embodiment does not particularly limit how the second geographic range is divided. It should be noted that the second geographic location lies within the second geographic range, the second geographic location is the shooting position of the second image, and the second geographic location may also be the same geographic location as the third geographic location.
In another possible implementation manner, the geographical location information is reported to the server, where the geographical location information includes: a second geographic range; correspondingly, the second image is an image of the other terminal device captured in the second geographic range, and the second geographic position is a capture position of the second image.
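A minimal sketch of deriving the second geographic range from a reported location follows, covering the two divisions described above; the district table is an illustrative stand-in, not real data.

```python
# Minimal sketch: derive the second geographic range either as a circle of
# preset radius or via an administrative-district lookup.
from dataclasses import dataclass

@dataclass
class CircleRange:
    center_lat: float
    center_lon: float
    radius_km: float

DISTRICT_TABLE = {  # hypothetical coarse-coordinate -> district mapping
    (31.2, 121.5): "Huangpu District, Shanghai",
}

def range_by_radius(lat, lon, radius_km=5.0):
    """Area covered with the location as center and a preset distance as radius."""
    return CircleRange(lat, lon, radius_km)

def range_by_district(lat, lon):
    """Administrative area where the location lies (table lookup stand-in)."""
    return DISTRICT_TABLE.get((round(lat, 1), round(lon, 1)), "unknown district")
```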
After the second image and the second geographic position are obtained, the second image is associated with the second geographic position, and the specific association manner may refer to the manner in which the first image is associated with the first geographic position, which is not described herein again.
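The association itself can be as simple as an index keyed by position. The following sketch, an assumed design rather than the application's actual storage, works for both the first and the second image:

```python
# Minimal sketch of associating images with geographic positions; the index
# also backs the map marks (one mark per occupied position bucket).
import collections

class GeoImageIndex:
    def __init__(self, precision: int = 4):
        self.precision = precision  # 4 decimal places is roughly 11 m
        self._index = collections.defaultdict(list)

    def _key(self, lat, lon):
        return (round(lat, self.precision), round(lon, self.precision))

    def associate(self, image_path, lat, lon):
        self._index[self._key(lat, lon)].append(image_path)

    def images_at(self, lat, lon):
        return self._index[self._key(lat, lon)]
```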
Then, a third mark is added at the second geographic position of the map, and the third mark is used for characterizing that the second geographic position is associated with a second image. Fig. 9 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in fig. 9, after adding the third mark, the car machine displays the third mark at the second geographic position of the map interface. When the user sees the third mark, the user knows that other users have shared a second image at that geographic position.
Optionally, the user may also view the second image. Specifically, the car machine receives a fourth instruction triggered by the user operating a displayed third mark, and displays a content display interface on the display screen according to the fourth instruction, with at least one second image associated with the second geographic position displayed in the content display interface. The fourth instruction instructs the device to display the content display interface on the map interface; it may be, for example, a view instruction for viewing the second image. The way the user views the second image is similar to the way the user views the first image and is not repeated here; only a specific example is given below.
Fig. 10 is a schematic diagram illustrating a state change of a user interface according to an embodiment of the present application. As shown in fig. 10, when the user wants to view the second image associated with the second geographic location, the user clicks the third mark, and the car machine receives the fourth instruction triggered by that click. The car machine then displays a content display interface in the middle of the map interface shown on the display screen, with at least one second image associated with the second geographic position displayed in the content display interface.
By marking the second geographic positions shared by other users and displaying the second images associated with them, the present application lets the user learn which interesting places exist at those positions, providing a reference for the user's trips and sightseeing so that the user does not miss scenery and points of interest along the way.
It should be noted that the marks described above are merely exemplary and do not limit the present application; other marks may also be used in the present application.
In the following, taking as an example an infrastructure in which the image processing apparatus is implemented in the server, the interaction between the server and the terminal device that realizes the sharing process is described from the server's perspective, with the server as the execution subject. In this embodiment, the terminal device is a car machine and the vehicle is a car, which will be described in detail.
Fig. 11 is a signaling flow diagram illustrating an image processing method according to an embodiment of the present application. As shown in fig. 11, the process includes:
and S11, the car machine sends the first geographic position to the server.
The first geographic location in this embodiment may specifically be the first geographic location where the vehicle is located, which is sent to the server by the vehicle machine in real time, or the first geographic location where the vehicle is located, which is sent to the server by the vehicle machine according to a preset period, or the first geographic location where the user is interested, which is sent to the server by the vehicle machine.
S12, the server determines a first image, wherein the first image is an image of other car machines shot in a first geographic range corresponding to the first geographic position.
For the implementation of the first geographic range, reference may be made to the geographic range described in the foregoing embodiments, and details are not repeated here. Those skilled in the art will understand that the determined first images may be all images captured by all other car machines within the first geographic range corresponding to the first geographic position, images captured by some of the other car machines within that range, or at least one image captured by at least one car machine within that range.
Optionally, before S12, this embodiment may further include S10A and S10B, where S10A and S10B specifically are:
S10A, the other car machine sends the first image and the second geographic position associated with the first image to the server.
It should be noted that the second geographic location is the shooting location of the first image, and the second geographic location is located within the first geographic range.
S10B, the server associates the first image with the second geographic location.
S13, the server sends the first image and the second geographic position associated with the first image to the car machine.
The server sends the first image and the second geographic position to the vehicle machine, and the vehicle machine can correlate the first image and the second geographic position. And the second geographic position is the shooting position of the first image.
In this embodiment, the server determines the first images shot by other car machines within the first geographic range corresponding to the first geographic position and sends them, together with the second geographic positions at which they were shot, to the car machine, so that the user of the car machine can quickly learn which first images users near the current geographic position have shared and where they were shot. At the same time, the first images and second geographic positions provide a timely reference for the user's trips and outings, making travel more interesting.
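A minimal sketch of this S11-S13 exchange follows; the message shapes, the in-memory store, and the city-scale distance approximation are all assumptions for illustration.

```python
# Minimal sketch (assumed message shapes) of S11-S13: the car machine reports
# a first geographic location; the server returns first images other devices
# shot inside the corresponding range, each with its shooting position.
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LocationReport:      # S11: car machine -> server
    device_id: str
    lat: float
    lon: float

@dataclass
class SharedImage:         # S13: server -> car machine
    image_url: str
    shot_lat: float        # second geographic location
    shot_lon: float

def _dist_km(lat1, lon1, lat2, lon2):
    # equirectangular approximation; adequate at city scale
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def handle_report(report: LocationReport,
                  store: List[Tuple[str, float, float, str]],  # (url, lat, lon, owner)
                  radius_km: float = 5.0) -> List[SharedImage]:
    """S12: select images other devices shot within the first geographic range."""
    return [SharedImage(url, lat, lon) for url, lat, lon, owner in store
            if owner != report.device_id
            and _dist_km(lat, lon, report.lat, report.lon) <= radius_km]
```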
Fig. 12 is a signaling flowchart of a method for processing an image according to an embodiment of the present application. The process comprises the following steps:
and S21, the vehicle machine sends the first geographic position of the vehicle to the server.
The implementation process of S21 is similar to that of S11, and reference may be made to the above embodiments specifically, which are not described herein again.
And S22, the server determines a third geographical position meeting the preset shooting condition in a second geographical range corresponding to the first geographical position.
The second geographical range is determined in a manner similar to that of the first geographical range, and this embodiment is not described herein again. The second geographical range may be the same as or different from the first geographical range.
The preset shooting condition may be that the number of times or the frequency with which the geographic location is photographed is higher than a preset value, that the number of times it is reviewed in Internet social communities exceeds a preset value, that the location is a national-level scenic spot, or the like.
The server checks each geographic position in the second geographic range to determine whether a third geographic position meeting the preset shooting condition exists.
And S23, the server sends shooting information to the vehicle machine, wherein the shooting information comprises a third geographical position, and the shooting information is used for indicating that the shooting function of the camera equipment needs to be started when the vehicle is located at the third geographical position.
In this embodiment, a third geographic position meeting the preset shooting condition is determined within the second geographic range corresponding to the first geographic position, and shooting information containing this third geographic position, a position of shooting value, is sent to the terminal device, so that the terminal device can shoot automatically and the user does not miss memorable moments.
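The preset shooting condition can be expressed as a simple predicate over per-location statistics; the following sketch uses assumed fields and thresholds and is only one possible reading of the conditions listed above.

```python
# Minimal sketch of S22's preset shooting condition: a location qualifies as a
# third geographic location if any statistic crosses its threshold.
from dataclasses import dataclass

@dataclass
class LocationStats:
    lat: float
    lon: float
    shot_count: int              # times photographed
    review_count: int            # reviews in Internet social communities
    national_scenic_spot: bool

def meets_shooting_condition(s: LocationStats,
                             min_shots: int = 50,
                             min_reviews: int = 100) -> bool:
    return (s.shot_count >= min_shots
            or s.review_count >= min_reviews
            or s.national_scenic_spot)
```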
A specific example is given below to illustrate how the third geographical location is determined by the present application.
Fig. 13 is a signaling flowchart of a method for processing an image according to an embodiment of the present application. The process comprises the following steps:
and S31, the vehicle machine sends the first geographic position of the vehicle to the server.
The implementation process of S31 is similar to that of S11, and reference may be made to the above embodiments specifically, which are not described herein again.
And S32, the server determines shot information of each fourth geographic position according to the second image sent by the other vehicle machine and the fourth geographic position associated with the second image.
The second image is an image of the other vehicle machine captured in a second geographic range corresponding to the first geographic position, and the fourth geographic position is a capture position of the second image.
Optionally, before S32, S30A and S30B are further included, which are as follows:
S30A, the other car machine sends the second image and the fourth geographic location associated with the second image to the server.
S30B, the server associates the second image with the fourth geographic location.
And S33, the server determines a third geographical position meeting the preset shooting condition in the fourth geographical positions according to the shot information of the fourth geographical positions.
The photographed information may specifically be the frequency or probability with which the fourth geographic location was photographed within a preset time period, and the like. The preset shooting condition may specifically be that the shooting frequency or probability is greater than a preset value, that it ranks above a preset rank, and the like.
And S34, the server sends shooting information to the vehicle machine, wherein the shooting information comprises a third geographical position, and the shooting information is used for indicating that the shooting function of the camera equipment needs to be started when the vehicle is located at the third geographical position.
In this embodiment, the second images shot by other terminal devices within the second geographic range corresponding to the first geographic position, together with their associated fourth geographic positions, are analyzed, and the third geographic positions at which many images were shot are extracted, yielding shooting positions of value. Shooting information containing these third geographic positions of shooting value is then sent to the terminal device, so that the terminal device can shoot automatically without missing memorable moments.
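A minimal sketch of this counting-and-ranking step (S32 to S33) follows; bucketing shooting positions by rounded coordinates is an illustrative simplification, not the application's actual method.

```python
# Minimal sketch of S32-S33: count how often each fourth geographic location
# appears as a shooting position, then keep the most-photographed buckets as
# third geographic locations.
from collections import Counter

def top_shooting_spots(shot_positions, top_n=3, precision=3):
    """shot_positions: iterable of (lat, lon); returns the top-N shot buckets."""
    counts = Counter((round(lat, precision), round(lon, precision))
                     for lat, lon in shot_positions)
    return counts.most_common(top_n)   # [((lat, lon), shot_count), ...]
```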
An image processing apparatus according to one or more embodiments of the present application is described in detail below. These image processing apparatuses can be implemented in the infrastructure of a vehicle or terminal device, as well as in an interactive system of server and client. Those skilled in the art will appreciate that these image processing apparatuses can be constructed from commercially available hardware components configured through the steps taught in the present scheme. For example, the processor components (or processing modules, processing units) may use single-chip computers, microcontrollers, microprocessors, and similar components from Texas Instruments, Intel, ARM, and the like.
Fig. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 14, the apparatus includes:
an input module 1401, configured to acquire a first image corresponding to a current scene;
an association module 1402, configured to obtain a first geographic location corresponding to the first image, and associate the first image with the first geographic location;
a marking module 1403, configured to add a first mark to the first geographic location of the map, so as to represent that the first geographic location is associated with the first image.
The image processing apparatus provided in this embodiment may be used to implement the method embodiments described above, and the implementation principle and technical effect are similar, which is not described herein again.
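As a sketch of how the three modules of fig. 14 cooperate, the following fragment uses duck-typed collaborators; camera, locator, and map_view are assumptions, not real APIs.

```python
# Minimal sketch of the fig. 14 pipeline: the input module acquires the first
# image, the association module pairs it with the first geographic location,
# and the marking module adds the first mark to the map.
class ImageProcessor:
    def __init__(self, camera, locator, map_view):
        self.camera = camera       # backs the input module
        self.locator = locator     # provides the first geographic location
        self.map_view = map_view   # map the first mark is drawn on

    def capture_and_mark(self):
        image = self.camera.capture()                 # input module 1401
        location = self.locator.current_location()    # association module 1402
        self.map_view.add_mark(location, image)       # marking module 1403
        return image, location
```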
Fig. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. This embodiment is implemented on the basis of the embodiment in fig. 14 and specifically further includes the following:
optionally, the method further comprises: the first display module 1404 is configured to display at least one first image at a preset position of a map interface displayed on the display screen.
Optionally, the apparatus further includes: a second display module 1405, configured to display the first mark at the first geographic location of the map interface displayed on the display screen.
Optionally, the input module 1401 is further configured to receive a first instruction triggered by a user operating the first marker;
the second display module 1405 is further configured to display a content display interface on the map interface displayed on the display screen according to the first instruction, where at least one first image associated with the first geographic location is displayed in the content display interface.
Optionally, the apparatus further includes: a first output module 1406;
the input module 1401 is further configured to receive a second instruction triggered by the user operating the content display interface;
the first output module 1406 is configured to send the first geographic location and the at least one first image associated with the first geographic location to a network device according to the second instruction, so that the network device performs sharing of the at least one first image.
Optionally, the marking module 1403 is further configured to replace the first mark with a second mark, where the second mark is used to represent that the first image is shared.
Optionally, the apparatus further includes: a second output module 1407;
the input module 1401 is further configured to receive a third instruction triggered by a user operating a preset user interface;
the second output module 1407 is configured to send, according to the third instruction, the acquired first image meeting the preset sharing condition and the first geographic location associated with each first image to a network device, so that the network device shares each first image.
Optionally, the input module 1401 is further configured to receive a second image sent by the server and a second geographic location associated with the second image;
the marking module 1403 is further configured to add a third mark to the second geographic location of the map, so as to represent that the second geographic location is associated with the second image.
Optionally, the apparatus further includes: a third output module 1408;
the third output module 1408 is configured to:
reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic location;
correspondingly, the second image is an image shot by other terminal equipment at the second geographic position;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image is an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image is an image shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
Optionally, the apparatus further includes: a third display module 1409;
the input module 1401 is further configured to receive a fourth instruction triggered by the user operating the displayed third mark;
the third display module 1409 is configured to display a content display interface on the display screen according to the fourth instruction, where a second image associated with the second geographic location is displayed in the content display interface.
Optionally, the apparatus further includes: a shooting module 1410 and a fourth output module 1411;
the shooting module 1410 is configured to determine that a shooting function of the image capturing apparatus needs to be started;
the fourth output module 1411 is configured to send a fifth instruction to the image capturing apparatus to instruct the image capturing apparatus to capture an image;
the input module 1401 is specifically configured to: and acquiring a first image obtained by shooting the current scene by the camera equipment.
Optionally, the shooting module 1410 is specifically configured to,
receiving sensing data sent by sensing equipment, and determining that a shooting function of the camera equipment needs to be started according to the sensing data; or
Receiving shooting information sent by a server, wherein the shooting information comprises a geographical position to be shot, and when the geographical position of a vehicle is the geographical position to be shot, determining that the shooting function of the camera equipment needs to be started; or
Receiving a voice signal input by a user, and determining that a shooting function of the camera equipment needs to be started according to the voice signal; or
Receiving a third image, carrying out image analysis on the third image, determining that the current scene is a preset scene according to an image analysis result, and determining that the shooting function of the camera equipment needs to be started; or
And receiving a control signal triggered by a user through hardware equipment, and determining that the shooting function of the camera equipment needs to be started according to the control signal.
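The five triggers just listed can be folded into one decision step. The following sketch is an assumed dispatcher over illustrative event fields and thresholds, not the application's actual logic:

```python
# Minimal sketch of the shooting-trigger decision: any of the five inputs
# (sensor data, server shooting info, voice, image analysis, or a hardware
# control signal) can determine that the camera should start shooting.
def should_start_shooting(event: dict) -> bool:
    kind = event.get("kind")
    if kind == "sensor":
        return event["speed_kmh"] < 5            # e.g. vehicle slowed near a viewpoint
    if kind == "server_info":
        return event["current_pos"] == event["to_shoot_pos"]
    if kind == "voice":
        return "take a photo" in event["text"].lower()
    if kind == "image_analysis":
        return event["scene"] in {"sunset", "landmark"}   # preset scenes
    if kind == "hardware":
        return event["button_pressed"]
    return False
```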
The apparatus provided in this embodiment may be used to implement the method embodiments described above, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 16 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 16, the apparatus includes:
an input module 1601, configured to receive a first geographic location sent by a terminal device;
a processing module 1602, configured to determine a first image, where the first image is an image captured by another terminal device within a first geographic range corresponding to the first geographic location;
an output module 1603, configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is a shooting location of the first image.
The image processing apparatus provided in this embodiment may be used to implement the method embodiments described above, and the implementation principle and technical effect are similar, which is not described herein again.
Fig. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. On the basis of the embodiment of fig. 16, in this embodiment,
the input module 1601 is further configured to receive the first image and a second geographic location associated with the first image sent by another terminal device;
the processing module 1602 is further configured to associate the first image with the second geographic location.
Optionally, the apparatus further includes: a shooting module 1604;
the shooting module 1604 is configured to determine a third geographic location meeting a preset shooting condition in a second geographic range corresponding to the first geographic location;
the output module 1603 is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that a shooting function of an image pickup device needs to be started when a vehicle is located at the third geographic location.
Optionally, the input module 1601 is further configured to receive a second image sent by another terminal device and a fourth geographic location associated with the second image, where the second image is an image captured by the other terminal device in a second geographic range corresponding to the first geographic location, and the fourth geographic location is a capture location of the second image;
the shooting module 1604 is specifically configured to determine shot information of each fourth geographic location according to the second image sent by the other terminal device and a fourth geographic location associated with the second image; and determining a third geographical position meeting preset shooting conditions in the fourth geographical positions according to the shot information of the fourth geographical positions.
The apparatus provided in this embodiment may be used to implement the method embodiments described above, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 18 is a schematic hardware structure diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 18, the apparatus provided in this embodiment includes: an input device 181, a processor 182, an output device 183, a display screen 184, a memory 185, and at least one communication bus 186. The communication bus 186 is used to enable communication connections between the elements. The memory 185 may include a high-speed RAM memory and may also include a non-volatile memory (NVM), such as at least one disk memory; various programs may be stored in the memory 185 for performing various processing functions and implementing the method steps of this embodiment.
Alternatively, the processor 182 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 182 is coupled to the input device 181 and the output device 183 through a wired or wireless connection.
Optionally, the input device 181 may comprise a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, and a transceiver. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, the transceiver may be a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like.
Optionally, the image processing device may be a device for image processing of a vehicle, for example a device for image processing of a car, of an aircraft, of a watercraft, or the like. For details of the device for image processing of a vehicle, the present application provides another embodiment; please refer to the following embodiments, which are not detailed here.
Optionally, the input device 181 is configured to acquire a first image corresponding to a current scene;
the processor 182, coupled to the input device 181, is configured to acquire a first geographic location corresponding to the first image and associate the first image with the first geographic location;
the processor 182 is further configured to add a first marker to the first geographic location of the map, for indicating that the first geographic location is associated with the first image.
Optionally, the device further includes: a display screen 184 coupled to the processor 182, the processor 182 being further configured to control the display screen 184 to display at least one first image at a preset position of the displayed map interface.
Optionally, the device further includes: a display screen 184 coupled to the processor 182, the processor 182 being further configured to control the display screen 184 to display the first mark at the first geographic location of the displayed map interface.
Optionally, the input device 181 is further configured to receive a first instruction triggered by a user operating the first marker;
the processor 182 is further configured to control the display screen 184 to display a content display interface on the displayed map interface according to the first instruction, where at least one first image associated with the first geographic location is displayed in the content display interface.
Optionally, the device further includes: an output device 183 coupled to the processor 182;
the input device 181 is further configured to receive a second instruction triggered by the user operating the content display interface;
the processor 182 is further configured to control the output device 183 to send the first geographic location and the at least one first image associated with the first geographic location to a network device according to the second instruction, so that the network device performs sharing of the at least one first image.
Optionally, the processor 182 is further configured to replace the first mark with a second mark, where the second mark is used to represent that the first image is shared.
Optionally, the device further includes: an output device 183 coupled to the processor 182;
the input device 181 is further configured to receive a third instruction triggered by a user operating a preset user interface;
the processor 182 is further configured to control, according to the third instruction, the output device 183 to send the acquired first image meeting the preset sharing condition and the first geographic location associated with each first image to a network device, so that the network device shares each first image.
Optionally, the input device 181 is further configured to receive a second image sent by the server and a second geographic location associated with the second image;
the processor 182 is further configured to add a third mark to the second geographic location of the map, for indicating that the second geographic location is associated with the second image.
Optionally, the device further includes: an output device 183 coupled to the processor 182;
the processor 182 is further configured to determine that a shooting function of the image capturing apparatus needs to be started;
the output device 183 is configured to send a fifth instruction to the image capturing device to instruct the image capturing device to capture an image;
the input device 181 is specifically configured to acquire a first image obtained by shooting a current scene by the imaging device.
Optionally, the input device 181 is further configured to receive sensing data sent by a sensing device, and the processor 182 is further configured to determine, according to the sensing data, that a shooting function of the image capturing device needs to be started; or
The input device 181 is further configured to receive shooting information sent by a server, where the shooting information includes a geographic location to be shot, and the processor 182 is further configured to determine that a shooting function of the camera needs to be started when the geographic location where the vehicle is located is the geographic location to be shot; or
The input device 181 is further configured to receive a voice signal input by a user, and the processor 182 is further configured to determine, according to the voice signal, that a shooting function of the image capturing apparatus needs to be started; or
The input device 181 is further configured to receive a third image, and the processor 182 is further configured to perform image analysis on the third image, and determine that a shooting function of the image capturing device needs to be started if it is determined that a current scene is a preset scene according to an image analysis result; or
The input device 181 is further configured to receive a control signal triggered by a user through a hardware device, and the processor 182 is further configured to determine that a shooting function of the image capturing apparatus needs to be started according to the control signal.
The device provided in this embodiment may be used to execute the method embodiments described in fig. 2 to fig. 10, which have similar implementation principles and technical effects, and this embodiment is not described herein again.
Fig. 19 is a schematic hardware configuration diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 19, the devices may include an input device 191, a processor 192, an output device 193, a memory 194, and at least one communication bus 195. The communication bus 195 is used to enable communication connections between the elements. Memory 194 may include a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, where various programs may be stored in memory 194 for performing various processing functions and implementing the method steps of the present embodiment.
Optionally, the input device 191 is configured to receive a first geographic location sent by the terminal device;
the processor 192, coupled to the input device 191 and the output device 193, is configured to determine a first image, which is an image captured by other terminal devices within a first geographic range corresponding to the first geographic location;
the output device 193 is configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is a shooting location of the first image.
Optionally, the processor 192 is further configured to determine, within a second geographic range corresponding to the first geographic position, a third geographic position meeting a preset shooting condition;
the output device 193 is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that a shooting function of a camera device needs to be started when the vehicle is located at the third geographic location.
Optionally, the input device 191 is further configured to receive a second image sent by another terminal device and a fourth geographic location associated with the second image, where the second image is an image captured by the other terminal device in a second geographic range corresponding to the first geographic location, and the fourth geographic location is a capture location of the second image;
the processor 192 is further configured to determine, according to the second image sent by the other terminal device and a fourth geographic location associated with the second image, captured information of each fourth geographic location, and determine, according to the captured information of each fourth geographic location, a third geographic location in the fourth geographic location, where the third geographic location meets a preset capturing condition.
The device provided in this embodiment may be used to execute the method embodiments described in fig. 11 to fig. 13, which have similar implementation principles and technical effects, and this embodiment is not described herein again.
Fig. 20 is a schematic hardware configuration diagram of an image processing apparatus according to an embodiment of the present application. FIG. 20 is a specific embodiment of FIG. 18 in an implementation. The processing device of the image may be, for example, a terminal device. As shown in fig. 20, the image processing apparatus of the present embodiment includes a processor 11 and a memory 12.
The processor 11 executes the computer program code stored in the memory 12 to implement the image processing method of fig. 2 to 10 in the above embodiment.
Optionally, the processor 11 is provided in the processing assembly 10. The apparatus for processing an image may further include: a communication component 13, a power component 14, a multimedia component 15, an audio component 16, an input/output interface 17, and a sensor component 18.
The processing assembly 10 generally controls the overall operation of the image processing device. The processing component 10 may include one or more processors 11 to execute instructions to perform all or part of the steps of the methods of fig. 2-10. Further, the processing component 10 may include one or more modules that facilitate interaction between the processing component 10 and other components. For example, the processing component 10 may include a multimedia module to facilitate interaction between the multimedia component 15 and the processing component 10.
The power supply component 14 provides power to the various components of the image processing device. The power components 14 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the processing device of the image.
The multimedia assembly 15 comprises a display screen providing an output interface between the processing device of the image and the user. The display screen may display the map interface in the above embodiments. The display screen includes a touch panel, which may be implemented as a touch screen to receive instructions from a user operating a user interface input. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 16 is configured to output and/or input audio signals. For example, the audio component 16 includes a Microphone (MIC) configured to receive an external audio signal, such as the aforementioned "zebra," when the image processing device is in an operational mode, such as a speech recognition mode. The received audio signal may further be stored in the memory 12 or transmitted via the communication component 13. In some embodiments, audio assembly 16 also includes a speaker for outputting audio signals.
The input/output interface 17 provides an interface between the processing component 10 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 18 includes one or more sensors for providing various aspects of state assessment for the processing device of the image. For example, the sensor assembly 18 may detect the open/closed status of the processing device of the image, the relative positioning of the assemblies, the presence or absence of user contact with the processing device of the image. The sensor assembly 18 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. In some embodiments, the sensor assembly 18 may also include acceleration sensors, gyroscope sensors, gravity sensors, and the like.
The communication assembly 13 is configured to facilitate communication between the processing device of the image and other devices in a wired or wireless manner. The processing device of the image may have access to a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the image processing device may include a SIM card slot for inserting a SIM card therein, so that the image processing device can log on to a GPRS network and establish communication with a server via the internet.
In an exemplary embodiment, the processing device of the image may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
On the basis of the above description of the image processing device of the embodiment shown in fig. 18, the present application also provides another embodiment, specifically disclosing a device for image processing of a vehicle. Optionally, the device for image processing of the vehicle may be a vehicle-mounted device, a device added after the vehicle leaves the factory, or the like.
Specifically, the apparatus for image processing of a vehicle may include: an onboard input device and an onboard processor; optionally, an on-board output device may also be included, as well as other additional devices.
It should be noted that the "onboard input device", "onboard output device", and "onboard processor" in the embodiments of the present application may be carried on a vehicle, on an aircraft, or on other types of vehicles; the embodiments of the present application do not limit the meaning of "onboard". Taking a car as an example, the onboard input device may be an in-vehicle input device, the onboard processor an in-vehicle processor, and the onboard output device an in-vehicle output device.
Depending on the type of vehicle installed, the onboard input device may include a variety of input devices, and may include at least one of a user-oriented onboard user interface, a device-oriented onboard device interface, an onboard programmable interface for software, and a transceiver, for example. Optionally, the vehicle-mounted device interface facing the device may be a wired interface (for example, a connection interface with a vehicle event data recorder on a console of the vehicle) for performing data transmission between the devices, or may be a hardware insertion interface (for example, a USB interface, a serial port, or the like) for performing data transmission between the devices; alternatively, the user-oriented in-vehicle user interface may be, for example, a steering wheel control key for a vehicle, a center control key for a large or small vehicle, a voice input device for receiving voice input (e.g., a microphone mounted on a steering wheel or an operating rudder, a central sound collection device, etc.), and a touch sensing device (e.g., a touch screen with touch sensing function, a touch pad, etc.) for receiving user touch input by a user; optionally, the vehicle-mounted programmable interface of the software may be, for example, an entry in a vehicle control system, which can be edited or modified by a user, such as an input pin interface or an input interface of a large chip or a small chip related in a vehicle; optionally, the transceiver may be a radio frequency transceiver chip, a baseband processing chip, a transceiver antenna, and the like, which have a communication function in a vehicle. According to the method in the embodiment corresponding to fig. 2 to 10, the onboard input device is used for acquiring the first image corresponding to the current scene. Correspondingly, when the device for image processing of the vehicle is a central control unit or other device on the vehicle, the vehicle-mounted input device may be a device transmission interface for communicating with various service sources inside the vehicle, and may also be a transceiver with a communication function. The on-board input device is also used to receive various commands triggered by the user. Correspondingly, when the device for image processing of the vehicle is a central control unit or other device on the vehicle, the vehicle-mounted input device may be a steering wheel control key for the vehicle, a central control key for a large vehicle or a small vehicle, a voice input device for receiving voice input, a touch sensing device (such as a touch screen with a touch sensing function, a touch pad, and the like) for receiving user touch input by a user, and the like.
Depending on the type of vehicle being installed, the onboard processor may be implemented using various Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), Central Processing Units (CPUs), controllers, micro-controllers, microprocessors, or other electronic components, and may be used to perform the methods described above. The onboard processor is coupled to the onboard input device and the onboard output device via an in-vehicle line or wireless connection. The onboard processor may perform the methods of the embodiments corresponding to fig. 2-10 described above.
Depending on the type of vehicle in which it is installed, the onboard output device may be a transceiver that establishes wireless transmissions with a user's handheld device or the like, or may be various display devices on the vehicle. The display device can be various display devices used in the industry, and can also be a head-up display with a projection function. The onboard output device of the embodiment may perform the method in the embodiments corresponding to fig. 2 to 10.
Fig. 21 is a schematic hardware structure diagram of an image processing apparatus according to an embodiment of the present application. FIG. 21 is a specific embodiment of FIG. 19 in an implementation. The image processing device may be, for example, a server, and as shown in fig. 21, the image processing device provided in this embodiment includes a processor and a memory 22. Optionally, the processor is provided in the processing assembly 20.
The processor executes the computer program code stored in the memory 22 to implement the image processing method shown in fig. 11 to 13 in the above embodiment.
Optionally, the image processing device may further include: a power supply component 23, a network interface 24, and an input/output interface 25.
The processing component 20 further includes one or more processors, and the memory 22 provides memory resources for storing instructions, such as application programs, that are executable by the processing component 20. The application programs stored in the memory 22 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 20 is configured to execute the instructions to perform the image processing method in the embodiments of fig. 11 to 13 described above.
The image processing device may further include a power supply component 23 configured to perform power management of the image processing device, a wired or wireless network interface 24 configured to connect the image processing device to a network, and an input/output (I/O) interface 25. The image processing device may operate based on an operating system stored in the memory 22, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Fig. 22 is a schematic structural diagram of a user interface system according to an embodiment of the present application. As shown in fig. 22, the system includes:
a display component 2201 for displaying a map interface;
a processor 2202 configured to trigger the display component 2201 to display a first marker at a first geographic location of the map interface, the first marker indicating that the first geographic location is associated with a first image.
In the map interface state provided by this embodiment, as shown in fig. 3, a first mark is displayed at the first geographic position of the map interface.
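As a minimal sketch of the behavior of fig. 22 described above, the following Kotlin code shows a processor object that associates a captured image with its shooting position and then triggers the display component to draw the first mark. All names are assumptions introduced for this example, not the actual API of the embodiments.

```kotlin
// Hypothetical sketch of the user interface system of fig. 22.

data class GeoPosition(val lat: Double, val lon: Double)

enum class MarkKind { FIRST, SECOND_SHARED, THIRD_FROM_OTHERS }

class DisplayComponent {
    // Stand-in for the real display component 2201: just logs the draw call.
    fun showMark(pos: GeoPosition, kind: MarkKind) =
        println("map: draw $kind mark at $pos")
}

class UiProcessor(private val display: DisplayComponent) {
    // Images associated with each geographic position.
    private val imagesByPosition = mutableMapOf<GeoPosition, MutableList<String>>()

    // Associate the captured image with its shooting position,
    // then trigger the display component to show the first mark.
    fun onImageCaptured(imagePath: String, pos: GeoPosition) {
        imagesByPosition.getOrPut(pos) { mutableListOf() }.add(imagePath)
        display.showMark(pos, MarkKind.FIRST)
    }
}

fun main() {
    val ui = UiProcessor(DisplayComponent())
    ui.onImageCaptured("/sdcard/img_0001.jpg", GeoPosition(31.2304, 121.4737))
}
```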
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a first image associated with the first geographic location at a preset position of the map interface. Specifically, as shown in fig. 5, a first mark is displayed on the map interface, and the first image is also displayed in the lower right corner of the map interface.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a second mark at a second geographic location of the map interface, where the second mark is used to represent that a second image associated with the second geographic location has been shared. Specifically, as shown in the schematic diagram of the user interface on the right side of fig. 7, a second mark is further displayed at the second geographic location of the map interface.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a third mark at a third geographic position of the map interface, where the third mark is used to represent that the third geographic position is associated with a third image shared by other users. Specifically, as shown in fig. 9, a third mark is further displayed at the third geographic position of the map interface.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a content display interface on the map interface based on a user operation, where an image corresponding to each marker is displayed on the content display interface, as shown specifically in fig. 6 and fig. 10.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a user interface on the content display interface, where the user interface is used by the user to trigger various instructions. This may be embodied, for example, as the "drop pin" icon shown in fig. 7 above.
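A minimal Kotlin sketch of the sharing flow described above follows, using println-based stand-ins for the real display and network devices: the user triggers sharing from the content display interface, the image and its first geographic position are sent to a network device, and the first mark is replaced by a second mark indicating that the image has been shared. All names are assumptions introduced for this example.

```kotlin
// Hypothetical sketch of the share flow: upload, then swap the mark kind.

data class Position(val lat: Double, val lon: Double)

interface NetworkDevice { fun upload(image: String, pos: Position) }

class MapMarks {
    private val marks = mutableMapOf<Position, String>()
    fun setMark(pos: Position, kind: String) { marks[pos] = kind; println("mark at $pos -> $kind") }
}

fun shareImage(image: String, pos: Position, net: NetworkDevice, marks: MapMarks) {
    net.upload(image, pos)              // let the network device share the image
    marks.setMark(pos, "second/shared") // replace the first mark with the second mark
}

fun main() {
    val net = object : NetworkDevice {
        override fun upload(image: String, pos: Position) = println("uploading $image shot at $pos")
    }
    val here = Position(31.2304, 121.4737)
    val marks = MapMarks().apply { setMark(here, "first") }
    shareImage("/sdcard/img_0001.jpg", here, net, marks)
}
```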
The user interface system provided by this embodiment can visually and intuitively show, on the map interface, the geographic positions at which images have been shot and whether there are images shared by other users. In addition, the user can share an image with other users by operating the user interface on the map interface. Because the operation flow is simple, the user can quickly share a shot image with other users even in a driving environment, which improves driving safety.
The application also provides a vehicle-mounted internet operating system. It will be understood by those skilled in the art that the vehicle-mounted internet operating system is system software that runs directly on the devices described above, and that it can manage and control hardware resources (such as the hardware of the image processing device shown in fig. 18 or fig. 20, or of the apparatus for image processing of a vehicle referred to in the present application) as well as software resources (such as the computer programs referred to in the present application). The operating system is the interface between the user and the device, and is also the interface between the hardware and other software.
The vehicle-mounted internet operating system can interact with other modules or functional equipment on a vehicle to control functions of the corresponding modules or functional equipment.
Specifically, taking as an example the case in which the image processing device is the vehicle-mounted terminal device of the above embodiments, and based on the vehicle-mounted internet operating system provided by the present application together with the development of vehicle communication technology, the vehicle can be connected with the server through the vehicle-mounted terminal device to form a network, that is, a vehicle-mounted internet. The vehicle-mounted internet system can provide voice communication services, positioning services, navigation services, mobile internet access, vehicle emergency rescue, vehicle data and management services, vehicle-mounted entertainment services, and the like.
The structure of the vehicle-mounted internet operating system provided by the present application is described in detail below. Fig. 23 is a schematic structural diagram of a vehicle-mounted internet operating system according to an embodiment of the present application. As shown in fig. 23, the operating system provided by the present application includes: an image control unit 231 and an association control unit 232.
The image control unit 231 is used for controlling the vehicle-mounted input device to acquire a first image corresponding to the current scene;
the association control unit 232 is used for acquiring a first geographic position corresponding to the first image and obtaining a map with a first marker added at the first geographic position, the first marker being used to represent that the first geographic position is associated with the first image; the map with the first marker is obtained by adding the first marker at the first geographic position of the original map.
Specifically, the in-vehicle input device in the present embodiment may include the input device in the above-described embodiment, and the image control unit 231 may control the in-vehicle input device to acquire the first image corresponding to the current scene.
In particular, the association control unit 232 may add the first marker at the first geographic position of the original map through an image processing system. The image processing system may be a function implemented by the operating system, or a function implemented by the processor in the above embodiments.
Further, the vehicle-mounted internet operating system may control the corresponding components to perform the methods described in fig. 2 to 10 through the image control unit 231 and the association control unit 232, either by these two units alone or in combination with other units.
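For illustration, a minimal Kotlin sketch of the two units of fig. 23 follows. The names mirror the image control unit 231 and the association control unit 232 of the description, but the code itself is an assumption introduced for this example and is not the actual operating system.

```kotlin
// Hypothetical sketch of the two operating-system units of fig. 23.

data class Geo(val lat: Double, val lon: Double)
data class MarkedMap(val marks: Map<Geo, String>)

class ImageControlUnit {
    // Controls the vehicle-mounted input device to acquire the first image
    // (stubbed here with a fixed path).
    fun acquireFirstImage(): String = "/sdcard/first_image.jpg"
}

class AssociationControlUnit {
    // Obtains the first geographic position and returns a map with a
    // first mark added at that position on the original map.
    fun associate(originalMarks: Map<Geo, String>, image: String, pos: Geo): MarkedMap =
        MarkedMap(originalMarks + (pos to "first mark for $image"))
}

fun main() {
    val image = ImageControlUnit().acquireFirstImage()
    val map = AssociationControlUnit().associate(emptyMap(), image, Geo(31.2304, 121.4737))
    println(map)
}
```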
The present application also provides a processor-readable storage medium, in which program instructions are stored, and the program instructions are used to make a processor of an image processing device execute the image processing method in the embodiments of fig. 2 to 10.
The present application also provides a processor-readable storage medium, in which program instructions are stored, the program instructions being configured to cause a processor of an image processing apparatus to execute the image processing method in the embodiments of fig. 11 to 13 described above.
The readable storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (56)

1. A method of processing an image, comprising:
acquiring a first image corresponding to a current scene;
acquiring a first geographical position corresponding to the first image, and associating the first image with the first geographical position;
adding a first marker on the first geographic position of the map for indicating that the first geographic position is associated with the first image;
the method further comprises the following steps: receiving a second image sent by a server and a second geographic position associated with the second image;
adding a third mark on the second geographic position of the map, wherein the third mark is used for representing that the second geographic position is associated with the second image;
the second geographic position is a geographic position selected by the user and sent to the server by the vehicle machine;
before the receiving of the second image sent by the server and the second geographic location associated with the second image, the method further comprises:
reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
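(By way of illustration only, and without limiting the claim language above: the following Kotlin sketch shows one possible reading of the two reporting alternatives, in which the terminal reports either a third geographic position or a second geographic range, and the server returns images shot by other terminals within the resulting range. The planar distance model, the 5.0 default radius, and all names are assumptions introduced for this example.)

```kotlin
// Illustrative sketch of the geographic-range exchange; not the claimed method.
import kotlin.math.hypot

data class Pt(val x: Double, val y: Double)     // simplified planar position
data class Shot(val image: String, val pos: Pt) // an image and its shooting position

sealed class GeoReport {
    data class ThirdPosition(val pos: Pt) : GeoReport()                    // server derives the range
    data class SecondRange(val center: Pt, val radiusKm: Double) : GeoReport() // terminal supplies the range
}

class Server(private val shotsByOthers: List<Shot>) {
    private val defaultRadiusKm = 5.0 // assumed server-side default range

    fun query(report: GeoReport): List<Shot> {
        val (center, radius) = when (report) {
            is GeoReport.ThirdPosition -> report.pos to defaultRadiusKm
            is GeoReport.SecondRange   -> report.center to report.radiusKm
        }
        // Return images shot by other terminals within the range.
        return shotsByOthers.filter { hypot(it.pos.x - center.x, it.pos.y - center.y) <= radius }
    }
}

fun main() {
    val server = Server(listOf(Shot("a.jpg", Pt(1.0, 1.0)), Shot("b.jpg", Pt(40.0, 40.0))))
    println(server.query(GeoReport.ThirdPosition(Pt(0.0, 0.0)))) // only a.jpg is in range
}
```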
2. The method of claim 1, wherein after the obtaining of the first image corresponding to the current scene, further comprising:
and displaying at least one first image at a preset position of a map interface displayed on the display screen.
3. The method of claim 1, wherein after adding the first marker at the first geographic location of the map, further comprising:
displaying the first marker at the first geographic location of a map interface displayed by a display screen.
4. The method of claim 3, wherein after displaying the first marker at the first geographic location of a map interface displayed by a display screen, further comprising:
receiving a first instruction triggered by the operation of the first mark by a user;
and displaying a content display interface on the map interface displayed by the display screen according to the first instruction, wherein at least one first image associated with the first geographic position is displayed in the content display interface.
5. The method of claim 4, wherein after displaying a content display interface on the map interface displayed by the display screen, further comprising:
receiving a second instruction triggered by the user operating the content display interface;
and sending the first geographic position and at least one first image associated with the first geographic position to network equipment according to the second instruction, so that the network equipment can share the at least one first image.
6. The method of claim 5, wherein after sending the first geographic location and the at least one first image associated with the first geographic location to a network device, further comprising:
replacing the first mark with a second mark, the second mark being used to characterize that the first image has been shared.
7. The method of claim 1, further comprising:
receiving a third instruction triggered by the operation of a preset user interface by a user;
and sending the acquired first images meeting the preset sharing condition and the first geographical positions associated with the first images to network equipment according to the third instruction, so that the network equipment can share the first images.
8. The method according to claim 7, wherein the first image satisfying the preset sharing condition comprises:
at least one first image associated with a first mark displayed on a map interface displayed on a display screen; or
At least one first image whose shooting position is within a first geographic range.
9. The method of claim 1, wherein after adding a third marker on the second geographic location of the map, further comprising:
receiving a fourth instruction triggered by the user operation of the displayed third mark;
and displaying a content display interface on the display screen according to the fourth instruction, wherein at least one second image associated with the second geographic position is displayed in the content display interface.
10. The method of claim 1, wherein before the obtaining the first image corresponding to the current scene, further comprising:
determining that a shooting function of the camera equipment needs to be started;
sending a fifth instruction to the image pickup device to instruct the image pickup device to perform shooting;
the acquiring of the first image corresponding to the current scene includes:
and acquiring a first image obtained by shooting the current scene by the camera equipment.
11. The method of claim 10, wherein the determining that a capture function of the imaging device needs to be activated comprises:
receiving sensing data sent by sensing equipment, and determining that a shooting function of the camera equipment needs to be started according to the sensing data; or
Receiving shooting information sent by a server, wherein the shooting information comprises a geographical position to be shot, and when the geographical position of a vehicle is the geographical position to be shot, determining that the shooting function of the camera equipment needs to be started; or
Receiving a voice signal input by a user, and determining that a shooting function of the camera equipment needs to be started according to the voice signal; or
Receiving a third image, carrying out image analysis on the third image, determining that the current scene is a preset scene according to an image analysis result, and determining that the shooting function of the camera equipment needs to be started; or
And receiving a control signal triggered by a user through hardware equipment, and determining that the shooting function of the camera equipment needs to be started according to the control signal.
12. A method of processing an image, comprising:
receiving a first geographical position sent by terminal equipment;
determining a first image, wherein the first image is an image shot by other terminal equipment in a first geographical range corresponding to the first geographical position;
sending the first image and a second geographic position associated with the first image to the terminal equipment, wherein the second geographic position is a shooting position of the first image;
before sending the second image and the second geographic location associated with the second image to the terminal device, the method further includes:
receiving geographical location information reported by the terminal equipment, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Receiving geographical location information reported by the terminal equipment, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
13. The method of claim 12, wherein prior to determining the first image, further comprising:
receiving the first image and a second geographic position associated with the first image sent by other terminal equipment;
associating the first image with the second geographic location.
14. The method of claim 12, further comprising:
determining a third geographical position meeting a preset shooting condition in a second geographical range corresponding to the first geographical position;
and sending shooting information to the terminal equipment, wherein the shooting information comprises the third geographical position, and the shooting information is used for indicating that the shooting function of the camera equipment needs to be started when the vehicle is located at the third geographical position.
15. The method of claim 14, wherein before determining a third geographic location satisfying a preset shooting condition in a second geographic range corresponding to the first geographic location, the method further comprises:
receiving a second image sent by other terminal equipment and a fourth geographic position associated with the second image, wherein the second image is an image shot by the other terminal equipment in a second geographic range corresponding to the first geographic position, and the fourth geographic position is a shooting position of the second image;
determining a third geographical position meeting a preset shooting condition in a second geographical range corresponding to the first geographical position, wherein the third geographical position comprises the following steps:
determining, according to the second images sent by the other terminal devices and the fourth geographic positions associated with the second images, shot information of each fourth geographic position;
and determining a third geographical position meeting preset shooting conditions in the fourth geographical positions according to the shot information of the fourth geographical positions.
16. The method of claim 15, wherein the shot information comprises: the frequency with which each fourth geographical position is shot within a preset time period;
the preset shooting condition is that the frequency of being shot is greater than a preset value, or that the frequency of being shot ranks within a preset ranking.
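(By way of illustration only, and without limiting claims 15 and 16 above: the following Kotlin sketch shows one possible implementation of the preset shooting condition, selecting fourth geographic positions whose shot frequency within a time window exceeds a preset value or ranks within a preset top-N. The thresholds and all names are assumptions introduced for this example.)

```kotlin
// Illustrative sketch of the frequency-based preset shooting condition.

data class Spot(val lat: Double, val lon: Double)

fun selectThirdPositions(
    shotLog: List<Pair<Spot, Long>>, // (position, shot timestamp in millis)
    windowStart: Long,
    presetCount: Int = 3,            // assumed "preset value"
    presetRank: Int = 2              // assumed "preset ranking" (top-N)
): Set<Spot> {
    // Count how often each position was shot within the preset time period.
    val counts = shotLog
        .filter { (_, t) -> t >= windowStart }
        .groupingBy { (spot, _) -> spot }
        .eachCount()
    val byCount = counts.filterValues { it > presetCount }.keys           // frequency > preset value
    val byRank = counts.entries.sortedByDescending { it.value }
        .take(presetRank).map { it.key }                                  // frequency within preset ranking
    return byCount + byRank                                               // either condition qualifies
}

fun main() {
    val hot = Spot(31.23, 121.47)
    val cold = Spot(30.00, 120.00)
    val log = List(5) { hot to 100L } + listOf(cold to 100L)
    println(selectThirdPositions(log, windowStart = 0L)) // the hot spot is selected
}
```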
17. An apparatus for processing an image, comprising:
the input module is used for acquiring a first image corresponding to a current scene;
the association module is used for acquiring a first geographic position corresponding to the first image and associating the first image with the first geographic position;
the marking module is used for adding a first mark on the first geographic position of the map and is used for representing that the first geographic position is associated with the first image;
the input module is further used for receiving a second image sent by the server and a second geographic position associated with the second image;
the marking module is further configured to add a third mark to the second geographic location of the map, so as to represent that the second geographic location is associated with the second image;
the second geographic position is a geographic position selected by the user and sent to the server by the vehicle machine;
further comprising: a third output module,
the third output module is configured to:
reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image is an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image is an image shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
18. The apparatus of claim 17, further comprising: a first display module, configured to display at least one first image at a preset position of a map interface displayed on the display screen.
19. The apparatus of claim 17, further comprising: a second display module, configured to display the first mark at the first geographical position of the map interface displayed by the display screen.
20. The apparatus of claim 19, wherein the input module is further configured to receive a first instruction triggered by a user operating the first marker;
the second display module is further configured to display a content display interface on the map interface displayed on the display screen according to the first instruction, where at least one first image associated with the first geographic location is displayed in the content display interface.
21. The apparatus of claim 20, further comprising: a first output module,
the input module is further used for receiving a second instruction triggered by the user operating the content display interface;
the first output module is configured to send the first geographic location and at least one first image associated with the first geographic location to a network device according to the second instruction, so that the network device shares the at least one first image.
22. The apparatus of claim 21, wherein the marking module is further configured to replace the first mark with a second mark, the second mark being used to characterize that the first image is shared.
23. The apparatus of claim 17, further comprising: a second output module;
the input module is also used for receiving a third instruction triggered by the operation of a preset user interface by a user;
and the second output module sends the acquired first images meeting the preset sharing condition and the first geographical positions associated with the first images to network equipment according to the third instruction, so that the network equipment can share the first images.
24. The apparatus of claim 17, further comprising: a third display module,
the input module is further used for receiving a fourth instruction triggered by the user operation of the displayed third mark;
and the third display module is used for displaying a content display interface on the display screen according to the fourth instruction, wherein a second image associated with the second geographic position is displayed in the content display interface.
25. The apparatus of claim 17, further comprising: the shooting module and the fourth output module;
the shooting module is used for determining that the shooting function of the camera equipment needs to be started;
the fourth output module is used for sending a fifth instruction to the image pickup equipment to instruct the image pickup equipment to carry out shooting;
the input module is specifically configured to acquire a first image obtained by shooting a current scene by the camera device.
26. The apparatus according to claim 25, wherein the shooting module is specifically configured to:
receiving sensing data sent by sensing equipment, and determining that a shooting function of the camera equipment needs to be started according to the sensing data; or
Receiving shooting information sent by a server, wherein the shooting information comprises a geographical position to be shot, and when the geographical position of a vehicle is the geographical position to be shot, determining that the shooting function of the camera equipment needs to be started; or
Receiving a voice signal input by a user, and determining that a shooting function of the camera equipment needs to be started according to the voice signal; or
Receiving a third image, carrying out image analysis on the third image, determining that the current scene is a preset scene according to an image analysis result, and determining that the shooting function of the camera equipment needs to be started; or
And receiving a control signal triggered by a user through hardware equipment, and determining that the shooting function of the camera equipment needs to be started according to the control signal.
27. An apparatus for processing an image, comprising:
the input module is used for receiving a first geographical position sent by the terminal equipment;
the processing module is used for determining a first image, wherein the first image is an image shot by other terminal equipment in a first geographical range corresponding to the first geographical position;
the output module is used for sending the first image and a second geographic position associated with the first image to the terminal equipment, wherein the second geographic position is a shooting position of the first image;
before sending the second image and the second geographic location associated with the second image to the terminal device, the method further includes:
receiving geographical location information reported by the terminal equipment, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Receiving geographical location information reported by the terminal equipment, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
28. The apparatus of claim 27, wherein the input module is further configured to receive the first image and a second geographic location associated with the first image sent by other terminal devices;
the processing module is further configured to associate the first image with the second geographic location.
29. The apparatus of claim 27, further comprising: a shooting module,
the shooting module is used for determining a third geographic position meeting preset shooting conditions in a second geographic range corresponding to the first geographic position;
the output module is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that a shooting function of the camera device needs to be started when the vehicle is located at the third geographic location.
30. The apparatus according to claim 29, wherein the input module is further configured to receive a second image sent by another terminal device and a fourth geographic location associated with the second image, the second image is an image captured by the other terminal device in a second geographic range corresponding to the first geographic location, and the fourth geographic location is a capture location of the second image;
the shooting module is specifically configured to determine shot information of each fourth geographic location according to the second image sent by the other terminal device and a fourth geographic location associated with the second image; and determining a third geographical position meeting preset shooting conditions in the fourth geographical positions according to the shot information of the fourth geographical positions.
31. The apparatus of claim 30, wherein the shot information comprises: the frequency with which each fourth geographical position is shot within a preset time period;
the preset shooting condition is that the frequency of being shot is greater than a preset value, or that the frequency of being shot ranks within a preset ranking.
32. An apparatus for processing an image, comprising: an input device and a processor;
the input device is used for acquiring a first image corresponding to a current scene;
the processor is coupled to the input device and used for acquiring a first geographic position corresponding to the first image and associating the first image with the first geographic position;
the processor is further configured to add a first marker to the first geographic location of the map, for indicating that the first geographic location is associated with the first image;
the input device is further used for receiving a second image sent by the server and a second geographic position associated with the second image;
the processor is further configured to add a third mark to the second geographic location of the map, for indicating that the second geographic location is associated with the second image;
the second geographic position is a geographic position selected by the user and sent to the server by the vehicle machine;
before the receiving of the second image sent by the server and the second geographic location associated with the second image, the method further comprises:
reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
33. The apparatus of claim 32, further comprising: a display screen coupled to the processor, the processor further configured to control the display screen to display at least one first image at a preset location of the displayed map interface.
34. The apparatus of claim 32, further comprising: a display screen coupled to the processor, the processor further to control the display screen to display the first marker at the first geographic location of the displayed map interface.
35. The device of claim 34, wherein the input device is further configured to receive a first instruction triggered by a user operating the first marker;
the processor is further configured to control the display screen to display a content display interface on the displayed map interface according to the first instruction, where at least one first image associated with the first geographic location is displayed in the content display interface.
36. The apparatus of claim 35, further comprising: an output device coupled to the processor;
the input device is further used for receiving a second instruction triggered by the user operating the content display interface;
the processor is further configured to control the output device to send the first geographic location and at least one first image associated with the first geographic location to a network device according to the second instruction, so that the network device performs sharing of the at least one first image.
37. The apparatus of claim 36, wherein the processor is further configured to replace the first marker with a second marker, the second marker being used to characterize that the first image has been shared.
38. The apparatus of claim 32, further comprising: an output device coupled to the processor;
the input device is also used for receiving a third instruction triggered by the operation of a preset user interface by a user;
the processor is further configured to control the output device to send the acquired first images meeting the preset sharing condition and the first geographic location associated with each first image to a network device according to the third instruction, so that the network device shares each first image.
39. The apparatus of claim 32, further comprising: an output device coupled to the processor;
the processor is also used for determining that the shooting function of the camera equipment needs to be started;
the output device is used for sending a fifth instruction to the image pickup device to instruct the image pickup device to carry out shooting;
the input device is specifically configured to acquire a first image obtained by shooting a current scene by the image pickup device.
40. The apparatus of claim 39,
the input device is further used for receiving sensing data sent by the sensing device, and the processor is further used for determining that the shooting function of the camera shooting device needs to be started according to the sensing data; or
The input device is further used for receiving shooting information sent by the server, the shooting information comprises a geographical position to be shot, and the processor is further used for determining that the shooting function of the camera shooting device needs to be started when the geographical position of the vehicle is the geographical position to be shot; or
The input equipment is also used for receiving a voice signal input by a user, and the processor is also used for determining that the shooting function of the camera equipment needs to be started according to the voice signal; or
The input device is further used for receiving a third image, the processor is further used for carrying out image analysis on the third image, and if the current scene is determined to be a preset scene according to an image analysis result, the shooting function of the camera shooting device is determined to be started; or
The input device is further used for receiving a control signal triggered by a user through hardware equipment, and the processor is further used for determining that the shooting function of the camera shooting device needs to be started according to the control signal.
41. An apparatus for processing an image, comprising: an input device, a processor, and an output device;
the input device is used for receiving a first geographical position sent by the terminal device;
the processor is coupled to the input device and the output device and used for determining a first image, wherein the first image is an image shot by other terminal devices in a first geographic range corresponding to the first geographic position;
the output device is configured to send the first image and a second geographic position associated with the first image to the terminal device, where the second geographic position is a shooting position of the first image;
before sending the second image and the second geographic location associated with the second image to the terminal device, the method further includes:
receiving geographical location information reported by the terminal equipment, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Receiving geographical location information reported by the terminal equipment, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
42. The apparatus of claim 41,
the processor is further configured to determine a third geographic position meeting a preset shooting condition in a second geographic range corresponding to the first geographic position;
the output device is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that a shooting function of the camera device needs to be started when the vehicle is located at the third geographic location.
43. The apparatus of claim 42,
the input device is further configured to receive a second image sent by another terminal device and a fourth geographic position associated with the second image, where the second image is an image captured by the other terminal device in a second geographic range corresponding to the first geographic position, and the fourth geographic position is a capture position of the second image;
the processor is further configured to determine, according to the second image sent by the other terminal device and a fourth geographic location associated with the second image, captured information of each fourth geographic location, and determine, according to the captured information of each fourth geographic location, a third geographic location in the fourth geographic location that meets a preset capturing condition.
44. An apparatus for image processing of a vehicle, comprising: an onboard input device and an onboard processor;
the airborne input equipment is used for acquiring a first image corresponding to the current scene;
the onboard processor is coupled to the onboard input device and used for acquiring a first geographic position corresponding to the first image and associating the first image with the first geographic position;
the on-board processor is further configured to add a first marker to the first geographic location of the map, for indicating that the first geographic location is associated with the first image;
the airborne input device is further used for receiving a second image sent by the server and a second geographic position associated with the second image;
the onboard processor is further configured to add a third mark to the second geographic location of the map, for indicating that the second geographic location is associated with the second image;
the second geographic position is a geographic position selected by the user and sent to the server by the vehicle machine;
before the receiving of the second image sent by the server and the second geographic location associated with the second image, the method further comprises:
reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
45. The apparatus of claim 44, wherein the on-board input device comprises: at least one of a programmable interface for software, a transceiver, a device-oriented in-vehicle device interface, and a user-oriented in-vehicle user interface.
46. The apparatus of claim 44, further comprising: a vehicle-mounted display screen,
the on-board display screen is coupled to the on-board processor, which is further configured to control the on-board display screen to perform the method of claim 2 or 3.
47. The apparatus of claim 46, further comprising: an on-board output device for outputting a signal to a user,
the on-board output device, coupled to an on-board processor, for performing the method of claim 5 or 7 or 9 or 11.
48. The device of claim 45, wherein the user-oriented in-vehicle user interface comprises one or more of:
a console control key;
a steering wheel control button;
a voice input device;
a touch sensing device.
49. The apparatus of claim 44, wherein the on-board processor is further configured to perform the method of any of claims 2 to 11.
50. A user interface system, comprising:
the display component is used for displaying a map interface;
the processor is used for triggering the display component to display a first mark on a first geographic position of the map interface and is used for representing that the first geographic position is associated with a first image;
the processor is further configured to trigger the display component to add a third mark at a second geographic position of the map interface, the third mark being used to represent that a second image is associated with the second geographic position;
the processor is further configured to report geographical location information to the server, where the geographical location information includes: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
51. The system of claim 50, wherein the processor is further configured to trigger the display component to display a first image associated with the first geographic location at a preset location of the map interface.
52. The system of claim 50, wherein the processor is further configured to trigger the display component to display a second mark on a second geographic location of the map interface, the second mark being indicative that a second image associated with the second geographic location has been shared.
53. The system of claim 50, wherein the processor is further configured to trigger the display component to display a third marker at a third geographic location of the map interface, the third marker being indicative that the third geographic location is associated with a second image shared by another user.
54. The system of any one of claims 50 to 53, wherein the processor is further configured to trigger the display component to display a content display interface on the map interface based on a user operation, the content display interface displaying an image corresponding to each of the markers.
55. The system of claim 54, wherein the processor is further configured to trigger the display component to display a user interface on the content display interface, the user interface configured to enable a user to trigger various instructions.
56. An in-vehicle internet operating system, comprising:
the image control unit is used for controlling the vehicle-mounted input equipment to acquire a first image corresponding to the current scene;
the association control unit is used for acquiring a first geographical position corresponding to the first image, acquiring a map added with a first mark at the first geographical position and used for representing that the first geographical position is associated with the first image, wherein the map added with the first mark is obtained by adding the first mark at the first geographical position of the original map;
the image control unit receives a second image sent by the server;
the association control unit is used for receiving a second geographic position which is associated with the second image and sent by the server, adding a third mark on the second geographic position of the map and representing that the second image is associated with the second geographic position;
the second geographic position is a geographic position selected by the user and sent to the server by the vehicle machine;
before the receiving of the second image sent by the server and the second geographic location associated with the second image, the method further comprises:
reporting geographical location information to the server, wherein the geographical location information comprises: a third geographic position, wherein the third geographic position is used for enabling the server to determine a second geographic range corresponding to the third geographic position;
correspondingly, the second image comprises an image shot by other terminal equipment in a second geographic range corresponding to the third geographic position, and the second geographic position is the shooting position of the second image;
or
Reporting geographical location information to the server, wherein the geographical location information comprises: a second geographic range;
correspondingly, the second image comprises images shot by other terminal equipment in the second geographic range, and the second geographic position is the shooting position of the second image.
CN201610251255.4A 2016-04-21 2016-04-21 Image processing method, device and equipment and user interface system Active CN107305561B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201610251255.4A CN107305561B (en) 2016-04-21 2016-04-21 Image processing method, device and equipment and user interface system
PCT/CN2017/080545 WO2017181910A1 (en) 2016-04-21 2017-04-14 Image processing method, apparatus, device and user interface system
TW106112956A TW201741630A (en) 2016-04-21 2017-04-18 Image processing method, apparatus, device and user interface system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610251255.4A CN107305561B (en) 2016-04-21 2016-04-21 Image processing method, device and equipment and user interface system

Publications (2)

Publication Number Publication Date
CN107305561A CN107305561A (en) 2017-10-31
CN107305561B true CN107305561B (en) 2021-02-02

Family

ID=60115635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610251255.4A Active CN107305561B (en) 2016-04-21 2016-04-21 Image processing method, device and equipment and user interface system

Country Status (3)

Country Link
CN (1) CN107305561B (en)
TW (1) TW201741630A (en)
WO (1) WO2017181910A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7028608B2 (en) * 2017-11-06 2022-03-02 トヨタ自動車株式会社 Information processing equipment, information processing methods, and programs
CN110956716A (en) * 2018-09-27 2020-04-03 上海博泰悦臻网络技术服务有限公司 Vehicle-based image acquisition method, transmission method, device, vehicle, system and medium
CN112700658B (en) * 2019-10-22 2023-02-03 奥迪股份公司 System for image sharing of a vehicle, corresponding method and storage medium
CN113627419A (en) * 2020-05-08 2021-11-09 百度在线网络技术(北京)有限公司 Interest region evaluation method, device, equipment and medium
CN112328924B (en) * 2020-10-27 2023-08-01 青岛以萨数据技术有限公司 Method, electronic equipment, medium and system for realizing picture viewer by web side
CN113507614A (en) * 2021-06-23 2021-10-15 青岛海信移动通信技术股份有限公司 Video playing progress adjusting method and display equipment
CN117041627B (en) * 2023-09-25 2024-03-19 宁波均联智行科技股份有限公司 Vlog video generation method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103800A (en) * 2009-12-17 2011-06-22 富士通天株式会社 Navigation apparatus, vehicle-mounted display system and map displaying method
CN102829788A (en) * 2012-08-27 2012-12-19 北京百度网讯科技有限公司 Live action navigation method and live action navigation device
CN103700254A (en) * 2013-12-31 2014-04-02 同济大学 Position service-based road condition sharing system
CN104904200A (en) * 2012-09-10 2015-09-09 广稹阿马斯公司 Multi-dimensional data capture of an environment using plural devices
CN105261081A (en) * 2015-11-05 2016-01-20 浙江吉利汽车研究院有限公司 Work recording device of vehicle safety system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046802A (en) * 2006-03-31 2007-10-03 马飞涛 Geographic picture searching method
US20110184953A1 (en) * 2010-01-26 2011-07-28 Dhiraj Joshi On-location recommendation for photo composition
CN101867730B (en) * 2010-06-09 2011-11-16 马明 Multimedia integration method based on user trajectory
US9179021B2 (en) * 2012-04-25 2015-11-03 Microsoft Technology Licensing, Llc Proximity and connection based photo sharing
CN104111930A (en) * 2013-04-17 2014-10-22 刘红超 Image file processing system
CN106331383B (en) * 2016-11-21 2020-08-28 努比亚技术有限公司 Image storage method and mobile terminal

Also Published As

Publication number Publication date
TW201741630A (en) 2017-12-01
WO2017181910A1 (en) 2017-10-26
CN107305561A (en) 2017-10-31

Similar Documents

Publication Publication Date Title
CN107305561B (en) Image processing method, device and equipment and user interface system
US11594131B2 (en) Method and apparatus for enhancing driver situational awareness
KR102087073B1 (en) Image-processing Apparatus for Car and Method of Sharing Data Using The Same
JP5676147B2 (en) In-vehicle display device, display method, and information display system
DE102018113258A1 (en) VEHICLE LOCATION AND GUIDANCE
CN105469102A (en) Vehicle driving information recording method and vehicle driving information recording device
JP6732677B2 (en) Video collection system, video collection device, and video collection method
KR20150085009A (en) Intra-vehicular mobile device management
US10757315B2 (en) Vehicle imaging support device, method, and program storage medium
CN107305704A (en) Processing method, device and the terminal device of image
CN107306345A (en) Traveling record processing method, device, equipment, operating system and the vehicles
JP2019194859A (en) Electronic devices, information systems, and program
CN112269939A (en) Scene search method, device, terminal, server and medium for automatic driving
CN112818240A (en) Comment information display method, comment information display device, comment information display equipment and computer-readable storage medium
KR101950593B1 (en) Audio video navigation apparatus and vehicle video monitoring system and method for utilizing user interface of the audio video navigation apparatus
CN111475233B (en) Information acquisition method, graphic code generation method and device
CN113965726A (en) Method, device and system for processing traffic video
CN110717386A (en) Method and device for tracking affair-related object, electronic equipment and non-transitory storage medium
CN202749090U (en) GPS (Global Positioning System) multifunctional audio-video system structure
KR20150124055A (en) Method for sharing contents using terminal for vehicle and apparatus thereof
CN115600243A (en) Data processing method and device based on vehicle, electronic equipment and storage medium
CN115437586A (en) Method, device, equipment and medium for displaying message on electronic map
CN114228742A (en) Method, device and equipment for outputting reliability of automatic driving system and storage medium
CN117171780A (en) Privacy protection method and related device
CN116108118A (en) Method and terminal equipment for generating thermal map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant