WO2017181910A1 - Image processing method, apparatus, device and user interface system - Google Patents

Image processing method, apparatus, device and user interface system

Info

Publication number: WO2017181910A1 (application PCT/CN2017/080545)
Authority: WO - WIPO (PCT)
Prior art keywords: image, geographic location, shooting, geographic, user
Other languages: English (en), French (fr)
Inventors: 胡蓉, 史徐华
Original assignee: 斑马网络技术有限公司

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 - Navigation specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06F16/20 - Information retrieval of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases

Definitions

  • the present application relates to Internet technologies, and in particular, to an image processing method, apparatus, device, and user interface system.
  • the driving recorder is an instrument that records information such as images and sounds while the vehicle is in motion. It not only provides evidence for traffic accidents, but also records the scenery along the user's journey. As a result, more and more users use a driving recorder while driving.
  • the driving recorder generally shoots cyclically and/or at intervals; therefore, the user does not know which geographic locations the driving recorder has photographed and stored.
  • when the user finds something interesting at a geographic location along the road and wants to know whether the driving recorder has captured that location, the user has to play back the images stored by the driving recorder and browse the playback pictures to determine whether the location was captured. Specifically, the user must first turn on the driving recorder, then enable its playback function, and fast-forward or rewind the playback picture to determine whether the driving recorder has captured the geographic location.
  • the present application provides an image processing method, apparatus, device, and user interface system to solve the problem of high operational complexity when the user wants to confirm whether a geographic location has been photographed.
  • the application provides a method for processing an image, including:
  • the method further includes:
  • At least one first image is displayed at a preset position of the map interface displayed on the display screen.
  • the method further includes:
  • the first mark is displayed on the first geographic location of the map interface displayed on the display screen.
  • the method further includes:
  • the present application can display the first image at the preset position, or display the first image on the content display interface after receiving the first instruction, so that the user can quickly view at least one first image associated with the first geographic location without performing cumbersome operations. Further, since the user can view the first image in time, the user can also delete first images that are not needed, thereby saving storage space.
  • the method further includes:
  • the application receives the second instruction triggered by the user operating the content display interface; that is, the user only needs simple interaction with the display screen to share the first image with other users, thereby reducing the complexity of the user operation.
  • the method further includes:
  • the first mark is replaced with a second mark used to characterize that the first image has been shared.
  • the method further includes:
  • the first image that meets a preset sharing condition includes:
  • at least one first image whose shooting location is within the first geographic range.
  • the method further includes:
  • before receiving the second image sent by the server and the second geographic location associated with the second image, the method further includes:
  • the geographic location information includes: a second geographic location
  • the second image is an image captured by another terminal device in the second geographic location
  • the geographic location information includes: a third geographic location, where the third geographic location is used to enable the server to determine a second geographic extent corresponding to the third geographic location;
  • the second image is an image captured by another terminal device in a second geographic range corresponding to the third geographic location, and the second geographic location is a shooting location of the second image;
  • the geographic location information includes: a second geographic range
  • the second image is an image captured by another terminal device in the second geographical range, and the second geographic location is a shooting position of the second image
  • the method further includes:
  • a content display interface is displayed on the display screen, and at least one second image associated with the second geographic location is displayed in the content display interface.
  • the application receives a second image sent by the server and a second geographic location associated with the second image, and adds a third mark on the second geographic location of the map for characterizing that the second geographic location is associated with the second image. In this way, the user can learn what interesting places exist at the second geographic location, which provides a reference for the user's travel so that the user does not miss the scenery and interesting things during the trip.
  • before acquiring the first image corresponding to the current scene, the method further includes:
  • the acquiring the first image corresponding to the current scene includes:
  • determining that a shooting function of the imaging device needs to be activated includes:
  • receiving a third image, performing image analysis on the third image, and, when the image analysis result indicates that the current scene is a preset scene, determining that a shooting function of the imaging device needs to be activated; or
  • the present application can ensure that the imaging device records the current scene in time when the vehicle is subjected to a collision or severe vibration, providing evidence for events such as traffic accidents encountered by the vehicle.
  • determining by voice that the shooting function of the imaging device should be activated means the user does not need to operate by hand to acquire a desired image; this frees the user's hands so that the user can concentrate on driving, improving driving safety.
  • the present application can obtain the shooting position from the server or through image analysis, and can actively record the scenery along the way for the user without distracting the vehicle owner.
  • the user does not need to photograph the scenery along the way, yet can still capture valuable moments and scenery en route.
  • the present application provides a method for processing an image, including:
  • the first image being an image captured by another terminal device in a first geographic range corresponding to the first geographic location
  • the present application determines the first image captured by another terminal device in the first geographic range corresponding to the first geographic location, and sends the first image and its second geographic location to the terminal device, so that the user of the terminal device can conveniently and quickly learn of the first images shared by users around the current geographic location and the second geographic locations associated with them.
  • the first image and the second geographic location can provide a timely reference for the user's travel or sightseeing, making the trip more interesting.
  • the method further includes:
  • the first image is associated with the second geographic location.
  • the method further includes:
  • the shooting information including the third geographic location, the shooting information being used to indicate that a shooting function of the imaging device needs to be activated when the vehicle is located in the third geographic location.
  • the third geographic location that meets the preset shooting condition is determined in the second geographic range corresponding to the first geographic location, and shooting information including this third geographic location, which is worth shooting, is sent to the terminal device, so that the terminal device can shoot automatically without missing a good moment.
  • before determining the third geographic location that meets the preset shooting condition in the second geographic range corresponding to the first geographic location, the method further includes:
  • determining, in the second geographic range corresponding to the first geographic location, a third geographic location that meets a preset shooting condition includes:
  • the capture statistics include: the frequency at which each fourth geographic location is photographed within a preset time period;
  • the preset shooting condition is that the frequency of being photographed is greater than a preset value, or that the frequency of being photographed ranks before a preset rank;
  • the third geographic locations meeting the condition can then be obtained by comparison.
  • the present application provides an image processing apparatus, and the functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the device includes:
  • An input module configured to acquire a first image corresponding to the current scene
  • An association module configured to acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location
  • a marking module configured to add a first mark on the first geographic location of the map, to indicate that the first geographic location is associated with the first image.
  • the present application provides an image processing apparatus, and the functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the device includes:
  • An input module configured to receive a first geographic location sent by the terminal device
  • a processing module configured to determine a first image, where the first image is an image captured by another terminal device in a first geographic range corresponding to the first geographic location;
  • an output module configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is a shooting location of the first image.
  • the application provides an image processing apparatus, including: an input device and a processor;
  • the input device is configured to acquire a first image corresponding to the current scene
  • the processor is coupled to the input device, configured to acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location;
  • the processor is further configured to add a first mark on the first geographic location of the map, to indicate that the first geographic location is associated with the first image.
  • the application provides an image processing apparatus, including: an input device, a processor, and an output device;
  • the input device is configured to receive a first geographic location sent by the terminal device
  • the processor is coupled to the input device and the output device, and configured to determine a first image, where the first image is an image captured by another terminal device in a first geographic range corresponding to the first geographic location ;
  • the output device is configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is a shooting location of the first image.
  • the application provides an apparatus for image processing of a vehicle, comprising: an onboard input device and an onboard processor;
  • the onboard input device is configured to acquire a first image corresponding to the current scene
  • the onboard processor is coupled to the onboard input device, configured to acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location;
  • the onboard processor is further configured to add a first mark on the first geographic location of the map for characterizing that the first geographic location is associated with the first image.
  • the application provides a user interface system, including:
  • Display component for displaying a map interface
  • a processor configured to trigger the display component to display a first mark on a first geographic location of the map interface, to indicate that the first geographic location is associated with the first image.
  • the application provides an in-vehicle Internet operating system, including:
  • An image control unit that controls the in-vehicle input device to acquire a first image corresponding to the current scene
  • the association control unit acquires a first geographic location corresponding to the first image and obtains a map with a first mark added at the first geographic location, the first mark characterizing that the first geographic location is associated with the first image; the map with the first mark is obtained by adding the first mark at the first geographic location of the original map.
  • the image processing method, apparatus, device, and user interface system provided by the present application acquire, after obtaining the first image corresponding to the current scene, the first geographic location corresponding to the first image; that is, the specific shooting position of the first image is determined.
  • the first image is then associated with the first geographic location, so that the user can quickly view the first image associated with the first geographic location when viewing the first geographic location.
  • the first mark is added to the first geographic location of the map, and the first geographic location is associated with the first image corresponding to the current scene, so that upon seeing the first mark the user directly knows that the imaging device has photographed the first geographic location, avoiding cumbersome operations and reducing the complexity of the user operation.
  • moreover, since the present application provides the specific shooting position, the user does not need to infer it from buildings or roads in the image, which improves the efficiency of determining the first geographic location.
  • FIG. 1 is a schematic diagram of an optional networking manner of the present application
  • FIG. 2 is a schematic flowchart of a method for processing an image according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a state of a user interface according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a state of a user interface according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a state of a user interface according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a state of a user interface according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of signaling processing of an image processing method according to an embodiment of the present disclosure.
  • FIG. 12 is a signaling flowchart of a method for processing an image according to an embodiment of the present disclosure
  • FIG. 13 is a signaling flowchart of a method for processing an image according to an embodiment of the present disclosure
  • FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 19 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 20 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 21 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 22 is a schematic structural diagram of a user interface system according to an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of an in-vehicle Internet operating system according to an embodiment of the present application.
  • the present application provides a method for processing an image, which can be applied to the field of vehicle driving.
  • the vehicle involved in the embodiments of the present application may be any automobile, or another vehicle having corresponding control functions.
  • the vehicle may be a fuel-only vehicle, a gas-only vehicle, a combined fuel-gas vehicle, or a power-assisted electric vehicle.
  • the vehicle has a corresponding onboard system.
  • the imaging device can be controlled to perform shooting in time, and an image captured by the imaging device is acquired. After the image is acquired, the image and the geographic location at which the image was captured are stored in association. Then, a mark is added to the geographical location on the map interface, so that the user can intuitively and quickly learn through the mark that the camera device has taken the geographic location without any operation.
  • an image associated with the geographic location may also be displayed on the map interface displayed by the display screen, so that the user can view the image captured by the imaging device at the geographic location.
  • an instruction triggered by the user operating the display screen may be acquired, and the image is sent to the network device according to the instruction, so that the image can be quickly and conveniently shared with other users.
  • the execution body of this embodiment may be an image processing apparatus, which may be implemented by software, by hardware, or by a combination of the two, and may be integrated in the infrastructure of a vehicle or a terminal device.
  • the apparatus can be implemented in an infrastructure of a terminal device including a mobile terminal, an in-vehicle device, and the like.
  • the mobile terminal can be, for example, a mobile phone or a tablet
  • the in-vehicle device can be, for example, a driving recorder, a car machine, a center console, a navigation device, and the like.
  • the apparatus can be implemented in the infrastructure of the server.
  • in the following, the car machine is taken as the in-vehicle device.
  • the image processing apparatus is implemented in the car machine or the server; that is, with the car machine or the server as the execution body, the networking mode and the specific implementations of the present application are described in detail.
  • FIG. 1 is a schematic diagram of an optional networking manner of the present application.
  • the method for processing an image provided by the present application can be implemented by the networking.
  • the network includes: a car machine 101, an imaging device 102, and a positioning module 103.
  • the car machine 101 refers to an in-vehicle infotainment product installed in the vehicle.
  • the car machine 101 is usually installed in the center console of the car, and its host unit may be integrated with or separate from the display screen.
  • functionally, the car machine 101 enables information communication between the person and the car, and between the car and the outside world.
  • the display screen of the car machine can show the navigation path, the driving path, and so on.
  • the imaging device 102 may be a camera disposed at any position of the vehicle, or may be a driving recorder, or may be a terminal device such as a mobile phone or a tablet having an imaging function, that is, the imaging device 102 is any device having an imaging function. After receiving the instruction for instructing shooting sent by the vehicle 101, the imaging device 102 captures the current scene and transmits the captured first image to the vehicle 101.
  • the positioning module 103 can be a Global Positioning System (GPS) or a BeiDou Navigation Satellite System (BDS).
  • the positioning module 103 may be built into the car machine or provided by another external device.
  • the positioning module 103 is used to provide position information to the car machine 101.
  • after acquiring the first image captured by the imaging device 102, the car machine 101 acquires the first geographic location of the current scene from the positioning module 103. The car machine 101 then processes the first image in combination with the first geographic location: for example, the first image is associated with the first geographic location, and a first mark is then added to the first geographic location of the map for characterizing that the first geographic location is associated with the first image. Other processing of the first image by the car machine 101 will be described in detail in the following embodiments.
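  • purely as an illustration, the following Python sketch models this association flow; all names here (CarMachine, GeoLocation, the locator interface) are hypothetical stand-ins, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GeoLocation:
    latitude: float
    longitude: float

@dataclass
class CarMachine:
    """Hypothetical stand-in for the car machine 101."""
    # first geographic location -> first images taken there; each key is
    # also a "first mark" that the map UI would render
    marks: dict = field(default_factory=dict)

    def on_first_image(self, image_path: str, locator) -> None:
        fix = locator.current_fix()     # positioning module 103 (assumed API)
        key = GeoLocation(fix.latitude, fix.longitude)
        self.marks.setdefault(key, []).append(image_path)   # associate image
```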
  • a sensing device 104 may further be included in the networking shown in FIG. 1.
  • the sensing device 104 can transmit sensing data to the car machine 101 in real time.
  • based on the sensing data, the car machine 101 can determine that the shooting function of the imaging device needs to be activated.
  • a person skilled in the art can understand that the above manner of starting the shooting function is only one feasible implementation by which the car machine 101 determines that the shooting function of the imaging device needs to be activated; other feasible implementations will be described in detail in the following embodiments.
  • the network shown in FIG. 1 may further include a server 105.
  • the server 105 can receive the first image transmitted by the car machine 101 and share the first image with other users. Similarly, the server 105 can also send second images shared by other users to the car machine 101.
  • the networking is only an exemplary networking.
  • the physical devices in the network can also be replaced by other devices.
  • the sensing device 104 can also be another detecting device, as long as the detecting device can transmit to the car machine 101 data characterizing that the vehicle has been subjected to a collision or violent vibration.
  • the possible implementation manners of the networking are not described herein again.
  • the method for processing an image provided by the present application will be described in detail below by taking the networking shown in FIG. 1 as an example.
  • FIG. 2 is a schematic flowchart diagram of a method for processing an image according to an embodiment of the present disclosure. As shown in Figure 2, the process includes:
  • Step 201 Acquire a first image corresponding to the current scene.
  • the vehicle acquires a first image obtained by the imaging device capturing the current scene.
  • for the implementation of the imaging device, refer to the embodiment shown in FIG. 1.
  • the first image includes a photo and/or video.
  • the user may preset the specific content included in the first image; for example, the user may preset the imaging device to take photos and videos simultaneously.
  • the car machine can acquire the first image according to a preset period. Specifically, the car machine obtains a preset period input by the user and sends the preset period to the imaging device; the imaging device then sends the first images of the current scene captured within one cycle to the car machine according to the preset period. For example, if the preset period is 5 minutes, the imaging device transmits the first images it captured in the past 5 minutes to the car machine every 5 minutes.
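  • as a minimal sketch of this preset-period transfer (the `frames_since` and `receive_first_images` interfaces are assumed, not from the disclosure):

```python
import time

def periodic_transfer(camera, car_machine, period_min: int = 5) -> None:
    """Every `period_min` minutes, push everything captured in the last
    period to the car machine, mirroring the 5-minute example above."""
    while True:
        time.sleep(period_min * 60)
        batch = camera.frames_since(seconds=period_min * 60)
        car_machine.receive_first_images(batch)
```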
  • after determining that the shooting function of the imaging device needs to be activated, the car machine sends a fifth instruction to the imaging device.
  • the fifth command is used to instruct the imaging device to perform shooting.
  • the fifth instruction is referred to as a shooting instruction.
  • the vehicle machine receives the first image obtained by the imaging device capturing the current scene.
  • the vehicle sends a shooting instruction to the imaging device, and the shooting instruction includes a manner of shooting, such as taking a photo or taking a video, or simultaneously taking a photo and a video.
  • the shooting instruction may further include a shooting duration. Alternatively, the shooting duration can also be set in advance.
  • the imaging device captures the current scene according to the shooting instruction, and transmits the captured first image to the vehicle.
  • the shooting function in this embodiment may be a snapshot function of the imaging device.
  • in this case, the fifth instruction is used to instruct the imaging device to take a snapshot; after receiving the fifth instruction, the imaging device takes a snapshot of the current scene.
  • the imaging device can perform normal shooting according to the mode set by itself, and store the first image according to its own setting.
  • determining that the shooting function of the imaging device needs to be activated includes the following feasible implementations.
  • one feasible implementation is to receive sensing data sent by the sensing device and determine, according to the sensing data, that the shooting function of the imaging device needs to be activated.
  • the sensing device may be an acceleration sensor or a gravity sensor or the like.
  • the sensing device may be a sensing device built into the car machine; such a built-in sensing device has higher sensitivity than sensing devices provided in other ways, improving the reliability of event detection.
  • the car machine can obtain the sensing data sent by the acceleration sensor or the gravity sensor in real time and monitor it. When the car machine detects that the sensing data is abnormal, it determines that the vehicle has encountered an emergency and starts the shooting function of the imaging device.
  • when the sensing data is abnormal, it is determined that the shooting function of the imaging device needs to be activated, so that the imaging device can record the current scene in time when the vehicle is subjected to a collision or severe vibration, providing evidence for events such as traffic accidents encountered by the vehicle.
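  • a minimal sketch of such an abnormality check, assuming a three-axis acceleration reading in g and an assumed 2.5 g threshold (the disclosure only says "abnormal"):

```python
def sensing_data_abnormal(accel_g, threshold_g: float = 2.5) -> bool:
    """Treat a large deviation from the 1 g rest baseline as a
    collision or violent vibration."""
    ax, ay, az = accel_g
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    return abs(magnitude - 1.0) > threshold_g

# monitoring loop (assumed interfaces):
# if sensing_data_abnormal(sensor.read()):
#     camera.start_shooting()   # i.e. issue the fifth (shooting) instruction
```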
  • another feasible implementation is to receive a voice signal input by the user and determine, according to the voice signal, that the shooting function of the imaging device needs to be activated.
  • that is, the shooting function of the imaging device can also be activated by a user-triggered voice command.
  • for example, the voice wake-up word is "zebra".
  • when the car machine receives the voice signal "Zebra Snapshot" input by the user, it determines from the voice signal that the shooting function of the imaging device needs to be activated.
  • determining by voice that the shooting function of the imaging device should be activated means the user does not need to operate by hand to acquire a desired image; this frees the user's hands so that the user can concentrate on driving, improving driving safety.
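  • a sketch of the voice trigger under these assumptions (the exact keyword matching and the camera interface are illustrative, not from the disclosure):

```python
def on_voice_signal(transcript: str, camera) -> None:
    """Trigger a snapshot on the wake-up word "zebra" plus a capture
    keyword, as in the "Zebra Snapshot" example."""
    text = transcript.strip().lower()
    if text.startswith("zebra") and "snapshot" in text:
        camera.snap_current_scene()   # assumed camera interface
```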
  • in yet another feasible implementation, the car machine receives shooting information sent by the server, the shooting information including a geographic location to be photographed; when the vehicle's geographic location is the geographic location to be photographed, it is determined that the shooting function of the imaging device needs to be activated.
  • specifically, the car machine can send the vehicle's geographic location to the server in real time, and the server determines whether there is a geographic location to be photographed near the vehicle's location; if so, the server sends the location to be photographed to the car machine.
  • the server can obtain geographic locations with beautiful scenery through the Internet.
  • when the server determines that there is a geographic location to be photographed (a geographic location with beautiful scenery) near the vehicle's location, the server sends shooting information to the car machine, the shooting information including the geographic location to be photographed.
  • the car machine acquires the vehicle's geographic location in real time.
  • when the vehicle's geographic location is the geographic location to be photographed, it is determined that the shooting function of the imaging device needs to be activated.
  • in a further feasible implementation, a third image sent by the imaging device is received, image analysis is performed on the third image, and when the current scene is determined to be a preset scene according to the analysis result, it is determined that the shooting function of the imaging device needs to be activated.
  • this embodiment places no limitation on the manner in which the imaging device transmits the third image.
  • after acquiring the third image, the car machine performs color analysis on it to obtain the color information of the third image, the color information including the types of colors and the area ratio of each color, and determines, based on the color information, whether the current scene is the preset scene; if so, it is determined that the shooting function of the imaging device needs to be activated.
  • the types of colors corresponding to the third image include red, green, yellow, brown, and gray. The area occupied by red is 25%, the area occupied by green is 30%, the area occupied by yellow is 25%, the area occupied by brown is 10%, and the area occupied by gray is 10%.
  • based on this color information, the car machine determines that the current scene is a landscape scene, which is a preset scene, and therefore determines that the shooting function of the imaging device needs to be activated.
  • the preset scenario in this embodiment is not limited to a landscape scenario, and may be an architectural scenario that satisfies a certain architectural feature.
  • the specific implementation manner of the preset scenario is not particularly limited herein.
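  • for illustration, a minimal sketch of such a color-based decision; the grouping of "natural" colors and the 0.5 threshold are assumptions, since the disclosure only says the decision is made based on the color information:

```python
def is_preset_scene(color_ratios: dict,
                    nature_colors=("green", "yellow", "brown"),
                    min_nature_share: float = 0.5) -> bool:
    """Classify the frame as a landscape (preset) scene when the
    assumed natural colors cover at least `min_nature_share` of it."""
    share = sum(color_ratios.get(c, 0.0) for c in nature_colors)
    return share >= min_nature_share

# the worked example above: green 30% + yellow 25% + brown 10% = 65%
print(is_preset_scene({"red": 0.25, "green": 0.30, "yellow": 0.25,
                       "brown": 0.10, "gray": 0.10}))   # True
```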
  • in the above two feasible implementations, the shooting position is obtained from the server or through image analysis, so the scenery along the way can be recorded actively for the user without distracting the vehicle owner.
  • the user does not need to photograph the scenery along the way, yet can still capture valuable moments and scenery en route.
  • Yet another feasible implementation manner is to receive a control signal triggered by the user through the hardware device, and determine, according to the control signal, that a shooting function of the imaging device needs to be activated.
  • the hardware device may be a steering wheel of a vehicle, a touch screen of a vehicle, a center console of a vehicle, or the like.
  • the user operates the hardware devices to trigger a control signal.
  • taking the steering wheel as an example, the steering wheel and the car machine can be connected by wire or wirelessly.
  • a preset button is disposed on the steering wheel; when the user presses the preset button, the car machine receives a control signal triggered through the steering wheel and then determines, according to the control signal, that the shooting function of the imaging device needs to be activated.
  • the control signal is triggered by the steering wheel, and the user operation is convenient and quick.
  • Step 202 Acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location.
  • after acquiring the first image of the current scene, the car machine acquires the first geographic location corresponding to the first image, that is, the shooting position of the first image. For example, the car machine sends a location acquisition request to the positioning module provided in the vehicle and receives the first geographic location corresponding to the first image returned by the positioning module.
  • optionally, the positioning module provides the first geographic location of the current scene to the car machine in real time, and the car machine displays the driving path on the display screen in real time based on it, so the car machine can obtain the first geographic location corresponding to the image from the current driving path. Those skilled in the art can understand that the car machine can also obtain the first geographic location corresponding to the first image in other ways, for example, by interacting with other terminal devices. This embodiment does not limit the specific manner of obtaining the first geographic location.
  • the car machine then associates the first image with the first geographic location.
  • specifically, the first image and the first geographic location may be stored in the memory in association by establishing a mapping relationship between them; or, when the first image is stored, a location attribute containing the first geographic location may be added to the image's attributes.
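  • both association variants are easy to picture in code; the sketch below is illustrative only (a JSON sidecar file stands in for the location attribute; EXIF GPS tags would be another option):

```python
import json

# variant 1: an explicit mapping relationship kept in memory
image_to_location: dict = {}

def associate(image_id: str, lat: float, lon: float) -> None:
    image_to_location[image_id] = (lat, lon)

# variant 2: a location attribute stored alongside the image itself
def store_with_location(image_path: str, lat: float, lon: float) -> None:
    with open(image_path + ".meta.json", "w") as f:
        json.dump({"first_geographic_location": [lat, lon]}, f)
```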
  • Step 203 Add a first mark on the first geographic location of the map, for indicating that the first geographic location is associated with the first image.
  • after obtaining the first geographic location, the car machine adds a first mark on the first geographic location of the map. Since the first image is associated with the first geographic location, adding the first mark to the first geographic location characterizes that the first geographic location is associated with the first image.
  • the first mark may be displayed directly on the map interface on the display screen, or may be displayed only when the user browses the map interface.
  • FIG. 3 is a schematic diagram of a state of a user interface according to an embodiment of the present application.
  • the first mark is displayed on the first geographic location of the map interface displayed on the display screen. That is, the first geographic location is marked.
  • the first mark can characterize that the first geographic location has been photographed, or that the first geographic location is associated with a first image, and so on. Further, in the following embodiments, the first mark may also be used to display the first image.
  • the driving path of the vehicle can also be displayed in real time on the map interface displayed on the display screen.
  • the car machine displays the first mark on the first geographic location on the map interface corresponding to the driving path of the vehicle shown on the display screen.
  • FIG. 4 is a schematic diagram of a state of a user interface according to an embodiment of the present application. As shown in FIG. 4, the car machine displays the first mark on the first geographic location near the position of the arrow on the vehicle's travel path. Those skilled in the art will appreciate that the marks at other locations along the travel path were added by the car machine at earlier first geographic locations before this moment. When the user sees a first mark, the user knows which geographic locations the imaging device has photographed and that the first images corresponding to those locations have been stored, so the user can subsequently view the first images at any time.
  • the method for processing an image obtains a first geographic location corresponding to the first image after obtaining the first image corresponding to the current scene, that is, determines a specific shooting location of the first image.
  • the first image is then associated with the first geographic location, so that the user can quickly view the first image associated with the first geographic location when viewing the first geographic location.
  • the first mark is added to the first geographic location of the map, and the first geographic location is associated with the first image corresponding to the current scene, so that upon seeing the first mark the user directly knows that the imaging device has photographed the first geographic location, avoiding cumbersome operations and reducing the complexity of the user operation.
  • moreover, since the present application provides the specific shooting position, the user does not need to infer it from buildings or roads in the image, which improves the efficiency of determining the first geographic location.
  • the present application also displays the first image, which may be implemented by the following feasible implementation manners.
  • in one feasible implementation, after the first image of the current scene captured by the imaging device is acquired, at least one first image is displayed at a preset position of the map interface displayed on the display screen. See FIG. 5 for the specific implementation process.
  • FIG. 5 is a schematic diagram of a state of a user interface according to an embodiment of the present application.
  • as shown in FIG. 5, a first mark is displayed, and at least one first image is also displayed in the lower right corner of the map interface.
  • this embodiment can display all the first images on the map interface, or display some of the first images and indicate on the map interface the total number of first images and the number currently displayed. When only some of the first images are displayed, the user can obtain the other first images by clicking or sliding on the display screen.
  • the preset position in this embodiment may be not only the lower right corner but also the lower left corner, the upper left corner, the upper right corner, and the like. Alternatively, the preset position may also change with the travel path of the vehicle, ie the first image does not cover the travel path of the vehicle.
  • in another feasible implementation, after the first mark is displayed on the first geographic location of the map interface shown on the display screen, a first instruction triggered by the user operating the first mark is received; according to the first instruction, a content display interface is displayed on the map interface shown on the display screen, and at least one first image associated with the first geographic location is displayed in the content display interface.
  • the first instruction is used to indicate that a content display interface is displayed on the map interface.
  • the first instruction may be, for example, a viewing instruction for viewing the first picture. See Figure 6 for the specific implementation process.
  • this embodiment may display all the first images on the content display interface, or display some of the first images and indicate on the content display interface the total number of first images and the number currently displayed. When only some of the first images are displayed, the user can obtain the other first images by clicking or sliding on the display screen.
  • FIG. 6 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • when the user wants to view the first image associated with the first geographic location, the user operates the first mark to trigger the first instruction; the operation may be clicking the first mark, long-pressing the first mark, sliding the first mark, or the like.
  • the car machine receives the first instruction triggered by the user operating the first mark. The car machine then displays a content display interface in the middle of the map interface shown on the display screen, and at least one first image associated with the first geographic location is displayed in the content display interface.
  • the content display interface may be located at a middle position of the map interface, or may be located at other positions of the map interface.
  • the specific location where the content display interface is located in this embodiment is not particularly limited.
  • the content display interface may further display the total number of photos and videos included in the displayed first images, as well as the first geographic location, the shooting time, and the like.
  • the above two feasible implementations enable the user to quickly view the at least one first image associated with the first geographic location without performing cumbersome operations. Further, since the user can view the first image in time, the user can also delete first images that are not needed, thereby saving storage space.
  • the car machine can also interact with a network device to implement a sharing function.
  • the network device may be a mobile network device, such as an in-vehicle device, a mobile terminal, or the like, or may be a server, a computer, or the like. That is, the first image is shared to other network devices, and the second image shared by other network devices to the vehicle is acquired.
  • the sharing process of the present application will be described in detail below using specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in some embodiments.
  • the second instruction is for indicating to send the first geographic location and the at least one first image associated with the first geographic location to the network device.
  • the second instruction may be, for example, a sharing instruction for sharing the first image.
  • the vehicle machine can receive the second instruction triggered by the user on the content display interface in a plurality of manners.
  • for example, the user can click or double-click any first image, whereupon the car machine receives the second instruction triggered by the user's click or double-click on the content display interface. Alternatively, a prompt window may be displayed on the content display interface with a "Yes"/"No" dialog box; when the user clicks "Yes", the car machine receives the second instruction triggered by the user on the content display interface.
  • FIG. 7 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • optionally, a user interface element is also displayed on the content display interface; it may be, for example, a window, an icon, a dialog box, a hover box, a button control, or the like. By operating this user interface element, the user can trigger the second instruction.
  • for example, a "drop pushpin" icon is set, and after the user clicks the "drop pushpin" icon, the car machine receives the second instruction triggered by the user on the content display interface.
  • the first geographic location and the first image associated with the first geographic location are transmitted to the network device.
  • a person skilled in the art can understand that after the user clicks "drop pushpin", all the first images can be sent to the network device; alternatively, the user first selects at least one first image to be shared, and after the user clicks "drop pushpin", the at least one first image selected by the user is sent to the network device.
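  • a possible shape for the data sent under the second instruction (the field names and coordinates are assumptions, not a format defined by the disclosure):

```python
def build_share_request(first_location, image_paths):
    """Bundle the first geographic location with the selected first
    image(s) for upload to the network device."""
    lat, lon = first_location
    return {
        "first_geographic_location": {"lat": lat, "lon": lon},
        "first_images": list(image_paths),   # at least one first image
    }

# e.g. network_device.upload(build_share_request((31.23, 121.47), ["img_0001.jpg"]))
```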
  • the application obtains the second instruction triggered by the user on the content display interface, that is, the user only needs to interact with the display screen to share the first image with other users, thereby reducing the complexity of the user operation.
  • optionally, after sharing, the first mark is replaced with a second mark, the second mark being used to characterize that the first image has been shared.
  • optionally, a third instruction triggered by the user operating a preset user interface element on the display screen is received; according to the third instruction, the acquired first images that meet a preset sharing condition, together with the first geographic location associated with each first image, are sent to the network device, so that the network device shares these first images.
  • the third instruction is used to instruct sending, to the network device, the first images that meet the preset sharing condition and the first geographic location associated with each first image.
  • the third instruction may be, for example, a sharing instruction for sharing the first images that satisfy the preset condition.
  • the first image that satisfies the preset sharing condition includes the following feasible implementation manners.
  • one feasible implementation: the at least one first image associated with the first marks currently displayed on the map interface shown on the display screen.
  • for example, when the user browses a geographic location of interest, only part of the map interface is shown on the display screen at that moment; if the user operates the preset user interface element and thereby triggers the third instruction, the car machine, after receiving it, shares the at least one first image associated with the first marks displayed on the visible map interface.
  • all the first images associated with the first tag may be sent to the network device, or at least one first image may be sent to the network device.
  • another feasible implementation: at least one first image whose shooting location is within the first geographic range.
  • the first geographic range may be preset by the user or may be defaulted by the system.
  • the car machine can send to the network device all the first images whose shooting locations are within the first geographic range, or send at least one of those first images.
  • FIG. 8 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • a button control is also displayed on the display.
  • when the user clicks the button control, the car machine receives the third instruction triggered by the user, and then sends, according to the third instruction, the acquired first images that meet the preset sharing condition and the first geographic location associated with each first image to the network device.
  • the foregoing only schematically illustrates feasible implementations of the preset sharing condition.
  • the preset sharing condition may also be other conditions.
  • for example, the preset sharing condition may also be all unshared first images that have been acquired, or the first images whose shooting time satisfies a preset condition, and the like.
  • optionally, the first images that satisfy the preset sharing condition do not include first images that have already been shared.
  • optionally, after sharing, the first mark is replaced with a second mark, the second mark being used to characterize that the first image has been shared.
  • in this way, the user can quickly and conveniently share a large number of first images with other users.
  • optionally, the car machine receives a second image sent by the server and a second geographic location associated with the second image, and adds a third mark on the second geographic location of the map for characterizing that the second geographic location is associated with the second image.
  • before receiving the second image sent by the server and the second geographic location associated with the second image, the car machine further reports geographic location information to the server.
  • feasible implementations of the geographic location information reported by the car machine to the server are as follows:
  • a feasible implementation manner is to report the geographic location information to the server, where the geographic location information includes: a second geographic location; correspondingly, the second image is an image captured by the other terminal device in the second geographic location.
  • the second geographic location may be the vehicle's current geographic location sent by the car machine to the server in real time, or the vehicle's current geographic location sent to the server according to a preset period, or a geographic location selected by the user and sent by the car machine to the server.
  • another feasible implementation is to report the geographic location information to the server, where the geographic location information includes a third geographic location used to enable the server to determine a second geographic range corresponding to it; correspondingly, the second image is an image captured by another terminal device in the second geographic range corresponding to the third geographic location, and the second geographic location is the shooting location of the second image.
  • the third geographic location may be a geographic location where the vehicle is currently located, or may be a geographic location selected by the user.
  • the server determines the second geographic range corresponding to the third geographic location, where the second geographic range may be preset by the user or may be the default of the car machine.
  • the second geographic range may specifically be a third geographic location and an area in the vicinity thereof.
  • the second geographic range may be the area covered by a circle centered on the third geographic location with a preset distance as its radius; or it may be the administrative area where the third geographic location is located. For example, if the third geographic location is on Huaihai East Road in Shanghai, the second geographic range is Shanghai's Huangpu District.
  • the embodiment is not particularly limited herein. It should be noted that the second geographic location is located in the second geographic range, and the second geographic location is a shooting location of the second image, and the second geographic location may also be the same geographic location as the third geographic location.
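  • the radius-based variant of such a range reduces to a distance test; a sketch, with 5 km standing in for the preset distance:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) fixes, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def in_geographic_range(center, candidate, radius_km=5.0):
    """True when `candidate` lies within the circle of `radius_km`
    around `center` (both are (lat, lon) tuples)."""
    return haversine_km(*center, *candidate) <= radius_km
```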
  • in yet another feasible implementation, the geographic location information is reported to the server, where the geographic location information includes a second geographic range; correspondingly, the second image is an image captured by another terminal device within the second geographic range, and the second geographic location is the shooting location of the second image.
  • after the second image and the second geographic location are obtained, the second image is associated with the second geographic location.
  • for the specific association manner, refer to the manner in which the first image is associated with the first geographic location; details are not repeated here.
  • FIG. 9 is a schematic diagram of a state of a user interface according to an embodiment of the present application. As shown in FIG. 9, after adding the third mark, the vehicle displays the third mark on the second geographic location of the map interface. When the user sees the third mark, the user can know that other users share the second image in this geographical location.
  • the user can also view the second image.
  • optionally, the car machine receives a fourth instruction triggered by the user operating the third mark, and displays, according to the fourth instruction, a content display interface on the display screen, in which at least one second image associated with the second geographic location is displayed.
  • the fourth instruction is used to indicate that a content display interface is displayed on the map interface.
  • the fourth instruction may be, for example, a view instruction for viewing the second picture.
  • the manner in which the user views the second image is similar to the manner of viewing the first image; it is not repeated here, and only one specific example is given.
  • FIG. 10 is a schematic diagram of a state change of a user interface according to an embodiment of the present application.
  • when the user wants to view the second image associated with the second geographic location, the user clicks the third mark, and the car machine receives the fourth instruction triggered by the click.
  • the car machine then displays a content display interface in the middle of the map interface shown on the display screen, and at least one second image associated with the second geographic location is displayed in the content display interface.
  • by marking the second geographic locations shared by other users and displaying the second images associated with them, the present application provides a reference for the user's travel, so that the user can learn what interesting places exist at a second geographic location and does not miss the scenery and fun during the trip.
  • The following describes the case where the image processing apparatus is implemented in the infrastructure of a server, i.e., with the server as the execution subject, and explains the sharing process from the server's perspective through the interaction between the server and the terminal device.
  • The description takes the case where the transportation means is a vehicle and the terminal device is its vehicle machine as an example.
  • FIG. 11 is a schematic diagram of a signaling flow of a method for processing an image according to an embodiment of the present disclosure. As shown in FIG. 11, the process includes:
  • the vehicle sends the first geographic location to the server.
  • The first geographic location in this embodiment may be one that the vehicle sends to the server in real time, or one that the vehicle sends to the server according to a preset period; it may also be a first geographic location of interest to the user that the vehicle sends to the server.
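  • A minimal sketch of the preset-period reporting variant (Python; get_location and send_to_server are assumed callables provided by the positioning module and the network stack, and the 30-second period is illustrative):

    import time

    def report_first_location(get_location, send_to_server, period_s=30):
        # Report the vehicle's current (first) geographic location to the
        # server once per period; the real-time variant would instead send
        # on every position update.
        while True:
            send_to_server({"first_geographic_location": get_location()})
            time.sleep(period_s)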
  • the server determines a first image, where the first image is an image captured by another vehicle in a first geographic range corresponding to the first geographic location.
  • The determined first image may be all images captured by all other vehicles in the first geographic range corresponding to the first geographic location, or images captured by some of the other vehicles in that range, or at least one image captured by at least one other vehicle in the first geographic range corresponding to the first geographic location.
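  • A minimal sketch of this server-side selection (Python; the record layout and the predicate are assumptions of the sketch):

    def determine_first_images(shared_images, in_first_range):
        # shared_images: iterable of (image, second_geographic_location)
        # pairs previously uploaded by other vehicles.
        # in_first_range: predicate deciding whether a shooting location
        # falls in the first geographic range, e.g. a circle test as
        # sketched earlier.
        return [(image, loc) for image, loc in shared_images if in_first_range(loc)]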
  • the embodiment may further include S10A and S10B, where S10A and S10B are specifically:
  • the other vehicle sends a first image to the server and a second geographic location associated with the first image.
  • the second geographic location is the shooting location of the first image, and the second geographic location is located within the first geographic extent.
  • the server associates the first image with the second geographic location.
  • the server sends a first image to the vehicle and a second geographic location associated with the first image.
  • the server sends the first image and the second geographic location to the vehicle, and the vehicle can associate the first image with the second geographic location.
  • the second geographic location is a shooting location of the first image.
  • The server of this embodiment determines the first images captured by other vehicles in the first geographic range corresponding to the first geographic location, and sends each first image and its second geographic location to the vehicle.
  • The user of the vehicle can thus conveniently and quickly learn of the first images shared by users around the current geographic location and the second geographic locations associated with them.
  • The first image and the second geographic location can provide a timely reference for the user's travel or sightseeing, improving the interest of the trip.
  • FIG. 12 is a signaling flowchart of a method for processing an image according to an embodiment of the present disclosure. As shown in FIG. 12, the process includes:
  • the vehicle sends the first geographic location where the vehicle is located to the server.
  • the server determines, in a second geographic range corresponding to the first geographic location, a third geographic location that meets a preset shooting condition.
  • The second geographic range is similar to the first geographic range and is not described here again. It should be noted that the second geographic range may be the same as or different from the first geographic range.
  • The preset shooting condition may be that the number of times or the frequency with which the geographic location is photographed is higher than a preset value, or that the number of comments on the geographic location in an Internet social community is more than a preset value, or that the geographic location is a national-level scenic area, etc.
  • The server checks each geographic location in the second geographic range to determine whether there is a third geographic location that meets the preset shooting condition.
  • the server sends the shooting information to the vehicle.
  • The shooting information includes the third geographic location and is used to indicate that the shooting function of the imaging device needs to be activated when the vehicle is located at the third geographic location.
  • In this embodiment, a third geographic location meeting the preset shooting condition is determined in the second geographic range corresponding to the first geographic location, and shooting information including this third geographic location of shooting value is sent to the terminal device, so that the terminal device can shoot automatically without missing a good moment.
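  • The condition test itself might be sketched as follows (Python; the field names and the thresholds 100 and 500 are illustrative assumptions, not values fixed by the disclosure):

    def meets_preset_shooting_condition(place):
        # place: statistics gathered for one candidate geographic location.
        return (place.get("capture_count", 0) > 100            # photographed often
                or place.get("comment_count", 0) > 500         # widely discussed online
                or place.get("national_scenic_area", False))   # listed scenic area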
  • Below, a specific embodiment is taken as an example to illustrate how the present application determines the third geographic location.
  • FIG. 13 is a signaling flowchart of a method for processing an image according to an embodiment of the present disclosure. As shown in FIG. 13, the process includes:
  • the vehicle sends the first geographic location where the vehicle is located to the server.
  • The server determines the photographed information of each fourth geographic location according to the second images sent by other vehicles and the fourth geographic locations associated with the second images.
  • The second image is an image captured by another vehicle in the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting location of the second image.
  • the other vehicle sends a second image to the server and a fourth geographic location associated with the second image.
  • the server associates the second image with the fourth geographic location.
  • The server determines, according to the photographed information of each fourth geographic location, a third geographic location that meets the preset shooting condition among the fourth geographic locations.
  • The photographed information may specifically be the frequency or probability with which the fourth geographic location is photographed within a preset time period.
  • The preset shooting condition may specifically be that the frequency or probability of being photographed is greater than a preset value, or that the frequency or probability of being photographed ranks before a preset ranking, and the like.
  • the server sends shooting information to the vehicle, the shooting information includes a third geographic location, and the shooting information is used to indicate that when the vehicle is located in the third geographic location, the shooting function of the imaging device needs to be activated.
  • By analyzing, in the second geographic range corresponding to the first geographic location, the second images captured by other vehicles and their associated fourth geographic locations, and extracting the third geographic locations that are photographed most often, the server can obtain relatively valuable shooting positions and send shooting information including them to the terminal device, so that the terminal device can shoot automatically without missing good moments.
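  • A minimal sketch of this frequency analysis (Python; top_n and min_count are illustrative parameters):

    from collections import Counter

    def select_third_locations(fourth_locations, top_n=10, min_count=50):
        # fourth_locations: one entry per second image captured within the
        # preset time window (each entry is that image's shooting location).
        counts = Counter(fourth_locations)
        # Keep locations photographed more than a preset number of times,
        # restricted to the top-ranked ones.
        return [loc for loc, n in counts.most_common(top_n) if n >= min_count]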
  • An image processing apparatus will be described in detail below.
  • The image processing apparatus may be implemented in the infrastructure of a vehicle or terminal device, or in an interactive system between a server and a client.
  • The image processing apparatus can be built from commercially available hardware components configured through the steps taught by the present scheme.
  • For example, the processor component (or processing module, processing unit) can use components such as a single-chip microcomputer, a microcontroller, or a microprocessor from Texas Instruments, Intel Corporation, ARM, and the like.
  • FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in Figure 14, the device includes:
  • the input module 1401 is configured to acquire a first image corresponding to the current scene;
  • the association module 1402 is configured to acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location;
  • the marking module 1403 is configured to add a first mark on the first geographic location of the map for characterizing that the first geographic location is associated with the first image.
  • The image processing apparatus provided in this embodiment may be used to perform the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
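  • For orientation only, the module split of FIG. 14 can be sketched as follows (Python; the backend interfaces camera, locator, and map_view and all method names are assumptions of this sketch, not the claimed apparatus itself):

    class ImageProcessingApparatus:
        def __init__(self, camera, locator, map_view):
            self.camera = camera        # imaging device (assumed interface)
            self.locator = locator      # positioning module (assumed interface)
            self.map_view = map_view    # map/marker backend (assumed interface)
            self.associations = {}      # first image -> first geographic location

        def acquire_first_image(self):              # input module 1401
            return self.camera.capture()

        def associate(self, image):                 # association module 1402
            first_location = self.locator.current_location()
            self.associations[image] = first_location
            return first_location

        def add_first_mark(self, first_location):   # marking module 1403
            self.map_view.add_marker(first_location, kind="first_mark")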
  • FIG. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. This embodiment is implemented on the basis of the embodiment of FIG. 14, and the details are as follows:
  • Optionally, the apparatus further includes a first display module 1404, configured to display the at least one first image at a preset position of the map interface displayed on the display screen.
  • Optionally, the apparatus further includes a second display module 1405, configured to display the first mark at the first geographic location of the map interface displayed on the display screen.
  • the input module 1401 is further configured to receive a first instruction triggered by the user operating the first mark;
  • the second display module 1405 is further configured to display, according to the first instruction, a content display interface on the map interface shown on the display screen, where at least one first image associated with the first geographic location is displayed in the content display interface.
  • Optionally, the apparatus further includes a first output module 1406;
  • the input module 1401 is further configured to receive a second instruction triggered by the user operating the content display interface;
  • the first output module 1406 is configured to send, according to the second instruction, the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device performs sharing of the at least one first image.
  • the marking module 1403 is further configured to replace the first mark with a second mark, where the second mark is used to indicate that the first image has been shared.
  • Optionally, the apparatus further includes a second output module 1407;
  • the input module 1401 is further configured to receive a third instruction triggered by the user operating a preset user interface;
  • the second output module 1407 is configured to send, according to the third instruction, the acquired first images that meet the preset sharing condition, together with the first geographic location associated with each first image, to the network device, so that the network device performs sharing of each first image.
  • the input module 1401 is further configured to receive a second image sent by the server and a second geographic location associated with the second image;
  • the marking module 1403 is further configured to add a third mark on the second geographic location of the map for characterizing that the second geographic location is associated with the second image.
  • Optionally, the apparatus further includes a third output module 1408.
  • The third output module 1408 is configured to report geographic location information to the server, in one of three forms:
  • the geographic location information includes a second geographic location, and correspondingly the second image is an image captured by another terminal device at the second geographic location; or
  • the geographic location information includes a third geographic location, where the third geographic location is used to enable the server to determine the second geographic range corresponding to the third geographic location, and correspondingly the second image is an image captured by another terminal device in that second geographic range, the second geographic location being the shooting location of the second image; or
  • the geographic location information includes a second geographic range, and correspondingly the second image is an image captured by another terminal device in the second geographic range, the second geographic location being the shooting location of the second image. (The three forms are sketched below.)
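  • The three report forms can be pictured as one message type in which exactly one field is populated (Python; the type and field names are illustrative assumptions of this sketch):

    from dataclasses import dataclass
    from typing import Optional, Tuple

    Location = Tuple[float, float]  # (latitude, longitude)

    @dataclass
    class GeoLocationReport:
        second_location: Optional[Location] = None  # server returns images shot here
        third_location: Optional[Location] = None   # server derives the second range
        second_range: Optional[dict] = None         # e.g. {"center": loc, "radius_km": 5}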
  • Optionally, the apparatus further includes a third display module 1409;
  • the input module 1401 is further configured to receive a fourth instruction triggered by the user operating the displayed third mark;
  • the third display module 1409 is configured to display, according to the fourth instruction, a content display interface on the display screen, where at least one second image associated with the second geographic location is displayed in the content display interface.
  • Optionally, the apparatus further includes a shooting module 1410 and a fourth output module 1411;
  • the shooting module 1410 is configured to determine that the shooting function of the imaging device needs to be activated;
  • the fourth output module 1411 is configured to send a fifth instruction to the imaging device to instruct the imaging device to shoot;
  • the input module 1401 is specifically configured to acquire the first image obtained by the imaging device shooting the current scene.
  • The shooting module 1410 is specifically configured to:
  • receive sensing data sent by a sensing device and determine, according to the sensing data, that the shooting function of the imaging device needs to be activated; or receive shooting information sent by the server, the shooting information including a geographic location to be photographed, and determine that the shooting function needs to be activated when the vehicle is at that location; or receive a voice signal input by the user and determine accordingly that the shooting function needs to be activated; or receive a third image, perform image analysis on the third image, and, if the image analysis result shows that the current scene is a preset scene, determine that the shooting function of the imaging device needs to be activated; or receive a control signal triggered by the user through a hardware device and determine accordingly that the shooting function needs to be activated.
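  • For the image-analysis trigger, one possible analysis, treating a frame dominated by nature-like colors as a scenery (preset) scene, can be sketched as follows (Python; the color set and the 0.6 threshold are illustrative assumptions):

    def is_preset_scene(color_ratios, nature_colors=("green", "red", "yellow"),
                        threshold=0.6):
        # color_ratios: fraction of image area per dominant color, e.g.
        # {"red": 0.25, "green": 0.30, "yellow": 0.25, "brown": 0.10, "gray": 0.10}.
        return sum(color_ratios.get(c, 0.0) for c in nature_colors) >= threshold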
  • the apparatus provided in this embodiment may be used to perform the foregoing method embodiments, and the implementation principles and technical effects are similar, and details are not described herein again.
  • FIG. 16 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in Figure 16, the device includes:
  • the input module 1601 is configured to receive a first geographic location sent by the terminal device;
  • the processing module 1602 is configured to determine a first image, where the first image is an image captured by another terminal device in a first geographic range corresponding to the first geographic location;
  • the output module 1603 is configured to send, to the terminal device, the first image and a second geographic location associated with the first image, the second geographic location being the shooting location of the first image.
  • The image processing apparatus provided in this embodiment may be used to perform the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
  • FIG. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. This embodiment is implemented on the basis of the embodiment of FIG. 16:
  • the input module 1601 is further configured to receive the first image sent by another terminal device and a second geographic location associated with the first image;
  • the processing module 1602 is further configured to associate the first image with the second geographic location.
  • Optionally, the apparatus further includes a shooting module 1604;
  • the shooting module 1604 is configured to determine, in the second geographic range corresponding to the first geographic location, a third geographic location that meets a preset shooting condition;
  • the output module 1603 is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that the shooting function of the imaging device needs to be activated when the vehicle is located at the third geographic location.
  • the input module 1601 is further configured to receive a second image sent by another terminal device and a fourth geographic location associated with the second image, where the second image is an image captured by the other terminal device in the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting location of the second image;
  • the shooting module 1604 is specifically configured to determine, according to the second images sent by other terminal devices and the fourth geographic locations associated with the second images, the photographed information of each fourth geographic location;
  • and to determine, according to the photographed information of each fourth geographic location, a third geographic location that satisfies the preset shooting condition among the fourth geographic locations.
  • the apparatus provided in this embodiment may be used to perform the foregoing method embodiments, and the implementation principles and technical effects are similar, and details are not described herein again.
  • FIG. 18 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • the apparatus provided in this embodiment includes an input device 181, a processor 182, an output device 183, a display screen 184, a memory 185, and at least one communication bus 186.
  • Communication bus 186 is used to implement a communication connection between components.
  • Memory 185 may include high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of this embodiment.
  • The processor 182 may be implemented by, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and is coupled to the input device 181 and the output device 183 via a wired or wireless connection.
  • the input device 181 may include multiple input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, and a transceiver.
  • The device-oriented device interface may be a wired interface for data transmission between devices, or a hardware insertion interface (for example, a USB interface, a serial port, etc.) for data transmission between devices.
  • The user-oriented user interface may be, for example, user-oriented control buttons, a voice input device for receiving voice input, and a touch-sensing device for receiving the user's touch input (for example, a touch screen with a touch-sensing function, a touchpad, etc.).
  • The software-programmable interface may be, for example, an entry through which the user can edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, the transceiver may be a radio-frequency transceiver chip, a baseband processing chip, and a transceiver antenna with communication functions.
  • The image processing device may be a device for image processing of a means of transportation; for example, it may be a device for image processing of a vehicle, a device for image processing of an aircraft, a device for image processing of a watercraft, etc.
  • For the device for image processing of a vehicle, the present application provides another embodiment below; it is not described in detail here.
  • the input device 181 is configured to acquire a first image corresponding to the current scene
  • the processor 182 is coupled to the input device 181, configured to acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location;
  • the processor 182 is further configured to add a first mark on the first geographic location of the map, to indicate that the first geographic location is associated with the first image.
  • Optionally, the device further includes a display screen 184 coupled to the processor 182; the processor 182 is further configured to control the display screen 184 to display at least one first image at a preset position of the displayed map interface.
  • The processor 182 is further configured to control the display screen 184 to display the first mark at the first geographic location of the displayed map interface.
  • The input device 181 is further configured to receive a first instruction triggered by the user operating the first mark;
  • the processor 182 is further configured to, according to the first instruction, control the display screen 184 to display a content display interface on the displayed map interface, where at least one first image associated with the first geographic location is displayed in the content display interface.
  • Optionally, the device further includes an output device 183 coupled to the processor 182;
  • the input device 181 is further configured to receive a second instruction triggered by the user operating the content display interface;
  • the processor 182 is further configured to control, according to the second instruction, the output device 183 to send the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device performs sharing of the at least one first image.
  • the processor 182 is further configured to replace the first mark with a second mark, where the second mark is used to indicate that the first image has been shared.
  • Optionally, the device further includes an output device 183 coupled to the processor 182;
  • the input device 181 is further configured to receive a third instruction triggered by the user operating a preset user interface;
  • the processor 182 is further configured to control, according to the third instruction, the output device 183 to send the acquired first images that meet the preset sharing condition, together with the first geographic location associated with each first image, to a network device, so that the network device performs sharing of each first image.
  • the input device 181 is further configured to receive a second image sent by the server and a second geographic location associated with the second image;
  • the processor 182 is further configured to add a third mark on the second geographic location of the map for characterizing that the second geographic location is associated with the second image.
  • Optionally, the device further includes an output device 183 coupled to the processor 182;
  • the processor 182 is further configured to determine that the shooting function of the imaging device needs to be activated;
  • the output device 183 is configured to send a fifth instruction to the imaging device to instruct the imaging device to perform shooting;
  • the input device 181 is specifically configured to acquire a first image obtained by the camera device capturing a current scene.
  • the input device 181 is further configured to receive sensing data sent by a sensing device, and the processor 182 is further configured to determine, according to the sensing data, that the shooting function of the imaging device needs to be activated; or
  • the input device 181 is further configured to receive shooting information sent by the server, where the shooting information includes a geographic location to be photographed, and the processor 182 is further configured to determine, when the geographic location of the vehicle is the geographic location to be photographed, that the shooting function of the imaging device needs to be activated; or
  • the input device 181 is further configured to receive a voice signal input by a user, and the processor 182 is further configured to determine, according to the voice signal, that a shooting function of the camera device needs to be activated; or
  • the input device 181 is further configured to receive a third image, and the processor 182 is further configured to perform image analysis on the third image and, if the image analysis result shows that the current scene is a preset scene, determine that the shooting function of the imaging device needs to be activated; or
  • the input device 181 is further configured to receive a control signal triggered by the user through the hardware device, and the processor 182 is further configured to determine, according to the control signal, that a shooting function of the imaging device needs to be activated.
  • the device provided in this embodiment can be used to perform the foregoing method embodiments in FIG. 2 to FIG. 10, and the implementation principle and technical effects are similar.
  • FIG. 19 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • the device can include an input device 191, a processor 192, an output device 193, a memory 194, and at least one communication bus 195.
  • Communication bus 195 is used to implement a communication connection between components.
  • Memory 194 may include high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of this embodiment.
  • the input device 191 is configured to receive a first geographic location sent by the terminal device;
  • The processor 192 is coupled to the input device 191 and the output device 193 and is configured to determine a first image, where the first image is an image captured by another terminal device in a first geographic range corresponding to the first geographic location;
  • the output device 193 is configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is a shooting location of the first image.
  • the processor 192 is further configured to determine, in a second geographic range corresponding to the first geographic location, a third geographic location that meets a preset shooting condition;
  • the output device 193 is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that the shooting function of the imaging device needs to be activated when the vehicle is located at the third geographic location.
  • the input device 191 is further configured to receive a second image sent by another terminal device and a fourth geographic location associated with the second image, where the second image is an image captured by the other terminal device in a second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting location of the second image;
  • the processor 192 is further configured to determine, according to the second images sent by other terminal devices and the fourth geographic locations associated with the second images, the photographed information of each fourth geographic location, and to determine, according to the photographed information of each fourth geographic location, a third geographic location that satisfies the preset shooting condition among the fourth geographic locations.
  • the device provided in this embodiment can be used to perform the foregoing method embodiments in FIG. 11 to FIG. 13 , and the implementation principles and technical effects are similar.
  • FIG. 20 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • Figure 20 is a specific embodiment of the implementation of Figure 18.
  • the processing device of the image may for example be a terminal device.
  • the image processing apparatus of the present embodiment includes a processor 11 and a memory 12.
  • the processor 11 executes the computer program code stored in the memory 12 to implement the image processing method of FIGS. 2 to 10 in the above embodiment.
  • the processor 11 is disposed in the processing component 10.
  • the processing device of the image may further include: a communication component 13, a power component 14, a multimedia component 15, an audio component 16, an input/output interface 17, and a sensor component 18.
  • Processing component 10 typically controls the overall operation of the processing device of the image.
  • Processing component 10 may include one or more processors 11 to execute instructions to perform all or part of the steps of the methods of FIGS. 2-10.
  • Processing component 10 can include one or more modules to facilitate interaction between processing component 10 and other components.
  • processing component 10 can include a multimedia module to facilitate interaction between multimedia component 15 and processing component 10.
  • Power component 14 provides power to various components of the image processing device.
  • Power component 14 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power to processing devices of the image.
  • the multimedia component 15 includes a display screen that provides an output interface between the processing device of the image and the user.
  • the display screen can display the map interface in the above embodiment.
  • the display screen includes a touch panel that can be implemented as a touch screen to receive instructions from a user operating a user interface input.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel.
  • the touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the audio component 16 is configured to output and/or input an audio signal.
  • The audio component 16 includes a microphone (MIC) configured to receive external audio signals, such as the "zebra" wake word described above, when the processing device of the image is in an operational mode, such as a voice recognition mode.
  • the received audio signal may be further stored in memory 12 or transmitted via communication component 13.
  • audio component 16 also includes a speaker for outputting an audio signal.
  • the input/output interface 17 provides an interface between the processing component 10 and the peripheral interface module, which may be a click wheel, a button, or the like. These buttons may include, but are not limited to, a volume button, a start button, and a lock button.
  • Sensor assembly 18 includes one or more sensors for providing various aspects of state assessment for the processing device of the image.
  • sensor component 18 can detect the open/closed state of the processing device of the image, the relative positioning of the components, the presence or absence of contact of the user with the processing device of the image.
  • Sensor assembly 18 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 18 may also include acceleration sensors, gyroscope sensors, gravity sensors, and the like.
  • the communication component 13 is configured to facilitate wired or wireless communication between the processing device of the image and other devices.
  • the image processing device can access a wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
  • the image processing device may include a SIM card slot for inserting the SIM card so that the image processing device can log into the GPRS network to establish communication with the server via the Internet.
  • The processing device of the image may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above methods.
  • the present application further provides another embodiment, and the present application specifically discloses an apparatus for image processing of a vehicle.
  • The device for image processing of the vehicle may be a built-in in-vehicle device, a device attached after the vehicle leaves the factory, and the like.
  • the apparatus for image processing of a vehicle may include: an onboard input device, an onboard processor; and optionally, an onboard output device and other additional devices.
  • The "onboard" in "onboard input device", "onboard output device", and "onboard processor" may refer to a "vehicle-mounted input device", "vehicle-mounted output device", and "vehicle-mounted processor" carried on a vehicle; it may also refer to an "onboard input device", "onboard output device", and "onboard processor" carried on an aircraft, or to devices carried on other types of transportation. The above does not limit the meaning of "onboard" in the embodiments of the present application.
  • When the transportation means is a vehicle, the onboard input device may be an in-vehicle input device, the onboard processor may be an in-vehicle processor, and the onboard output device may be an in-vehicle output device.
  • Depending on the type of transportation in which it is installed, the above-described in-vehicle input device may include various input devices, for example, at least one of a user-oriented in-vehicle user interface, a device-oriented in-vehicle device interface, a software-programmable in-vehicle interface, and a transceiver.
  • The device-oriented in-vehicle device interface may be a wired interface for data transmission between devices (for example, a connection interface with a driving recorder on the center console of the vehicle), or a hardware insertion interface (for example, a USB interface, a serial port, etc.) for data transmission between devices.
  • The user-oriented in-vehicle user interface may be, for example, a steering wheel control button of the vehicle, or a center-console control for a large or small vehicle.
  • The software-programmable in-vehicle interface may be, for example, an entry in the vehicle control system that the user can edit or modify, such as an input pin interface or an input interface of chips in the vehicle;
  • the transceiver may be a radio frequency transceiver chip with a communication function in the vehicle, a baseband processing chip, and a transceiver antenna.
  • the onboard input device is configured to acquire a first image corresponding to the current scene.
  • In this embodiment, the in-vehicle input device may be a device transmission interface communicating with various service sources inside the vehicle, or a transceiver with communication functions.
  • the onboard input device is also operative to receive various commands triggered by the user.
  • In this case, the in-vehicle input device may be a steering wheel control button of the vehicle, or a center-console control for a large or small vehicle.
  • Depending on the type of transportation in which it is installed, the onboard processor can be implemented using various application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), central processing units (CPUs), controllers, microcontrollers, microprocessors, or other electronic components, and is used to perform the above methods.
  • the onboard processor is coupled to the onboard input device and the onboard output device via an in-vehicle line or wireless connection.
  • the above-described onboard processor can perform the methods in the embodiments corresponding to FIGS. 2 through 10 described above.
  • Depending on the type of transportation in which it is installed, the above-described onboard output device may be a transceiver that establishes wireless transmission with the user's handheld device or the like, or any of various display devices on the vehicle.
  • The display device can be any of various display devices used in the industry, or a head-up display with a projection function.
  • The onboard output device of this embodiment can be used in performing the methods in the embodiments corresponding to FIG. 2 to FIG. 10 described above.
  • FIG. 21 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present disclosure.
  • Figure 21 is a specific embodiment of the implementation of Figure 19.
  • the processing device of the image may be, for example, a server.
  • the processing device of the image provided by this embodiment includes a processor and a memory 22.
  • the processor is disposed in the processing component 20.
  • the processor executes the computer program code stored in the memory 22 to implement the image processing method shown in FIGS. 11 to 13 in the above embodiment.
  • the processing device of the image may further include: a power component 23, a network interface 24, and an input/output interface 25.
  • The processing component 20 further includes one or more processors, and memory resources represented by the memory 22 for storing instructions executable by the processing component 20, such as an application.
  • The application stored in the memory 22 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 20 is configured to execute instructions to perform the processing method of the image in the above-described embodiments of FIGS. 11 through 13.
  • The image processing device may further include a power supply component 23 configured to perform power management of the image processing device, a wired or wireless network interface 24 configured to connect the image processing device to a network, and an input/output (I/O) interface 25.
  • The image processing device can operate based on an operating system stored in the memory 22, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • FIG. 22 is a schematic structural diagram of a user interface system according to an embodiment of the present application. As shown in Figure 22, it includes:
  • a display component 2201 configured to display a map interface
  • The processor 2202 is configured to trigger the display component 2201 to display a first mark at a first geographic location on the map interface, to indicate that the first geographic location is associated with the first image.
  • the schematic diagram of the state of the map interface provided by this embodiment may be as shown in FIG. 3 above, and the first mark is displayed on the first geographic location of the map interface.
  • Optionally, the processor 2202 is further configured to trigger the display component 2201 to display the first image associated with the first geographic location at a preset position of the map interface. Specifically, as shown in FIG. 5 above, a first mark is displayed on the map interface, and the first image is also displayed in the lower right corner of the map interface.
  • Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a second mark at a second geographic location on the map interface, where the second mark is used to indicate that the second image associated with the second geographic location has been shared. Specifically, as shown in the user interface diagram on the right side of FIG. 7, a second mark is also displayed at the second geographic location on the map interface.
  • Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a third mark at a third geographic location on the map interface, to indicate that the third geographic location is associated with a third image shared by other users. Specifically, as shown in FIG. 9 above, a third mark is also displayed at the third geographic location on the map interface.
  • Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a content display interface on the map interface, where the image corresponding to each mark is displayed in the content display interface. Specifically, this can be as shown in FIG. 6 and FIG. 10 described above.
  • Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a user interface on the content display interface, where the user interface is used by the user to trigger various instructions. For example, it may be the "Dropstick" icon shown in FIG. 7 above.
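  • A minimal sketch of how the three mark types might be rendered (Python; the MARK_KINDS labels and the map_interface.draw_marker call are assumptions of this sketch, not an API from the disclosure):

    MARK_KINDS = {
        "first_mark":  "image captured at this location",
        "second_mark": "image already shared",
        "third_mark":  "image shared here by another user",
    }

    def render_marks(map_interface, marks):
        # marks: iterable of (location, kind) pairs; the three kinds mirror
        # the first, second, and third marks of this user interface system.
        for location, kind in marks:
            map_interface.draw_marker(location, tooltip=MARK_KINDS[kind])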
  • The user interface system provided in this embodiment can intuitively display captured images to the user on the map interface.
  • The images can be shared with other users with a simple operation flow, which makes it convenient for the user to quickly share captured images with other users in a driving environment, thereby improving driving safety.
  • the application also provides an in-vehicle internet operating system.
  • The in-vehicle Internet operating system can manage and control the hardware of the image processing device shown in FIG. 18 or FIG. 20, or of the device for image processing of a vehicle according to the present application, as well as the software resources involved; it is the system software running directly on the above devices.
  • The operating system is the interface between the user and the above devices, and also the interface between the hardware and other software.
  • the in-vehicle Internet operating system provided by the present application can interact with other modules or functional devices on the vehicle to control the functions of the corresponding modules or functional devices.
  • When the transportation means in the above embodiments is a vehicle and the image processing device is an in-vehicle terminal device, the vehicle is no longer independent of the communication network: vehicles can be connected to each other through in-vehicle terminal devices and servers to form a network, thereby forming the in-vehicle Internet.
  • the in-vehicle Internet system can provide voice communication services, location services, navigation services, mobile internet access, vehicle emergency rescue, vehicle data and management services, in-vehicle entertainment services, and the like.
  • FIG. 23 is a schematic structural diagram of an in-vehicle Internet operating system according to an embodiment of the present application.
  • As shown in FIG. 23, the operating system provided by the present application includes an image control unit 231 and an association control unit 232.
  • The image control unit 231 controls the in-vehicle input device to acquire the first image corresponding to the current scene.
  • The association control unit 232 is configured to acquire the first geographic location corresponding to the first image and obtain a map with a first mark added at the first geographic location, the mark indicating that the first geographic location is associated with the first image, where the map with the first mark is obtained by adding the first mark at the first geographic location of the original map.
  • the in-vehicle input device in this embodiment may include the input device in the foregoing embodiment, and the image control unit 231 may control the in-vehicle input device to acquire the first image corresponding to the current scene.
  • The association control unit 232 can add the first mark at the first geographic location of the original map through an image processing system.
  • the image processing system may be a function implemented by an operating system, or the image processing system may be a function implemented by a processor in the above embodiment.
  • The in-vehicle Internet operating system may, through the image control unit 231 and the association control unit 232 described above, or these two units in combination with other units, control the corresponding components to perform the methods of FIG. 2 to FIG. 10 described above.
  • the present application also provides a processor readable storage medium having stored therein program instructions for causing a processor of an image processing device to perform the image processing method of the above-described embodiments of FIGS. 2 to 10.
  • the present application also provides a processor readable storage medium having stored therein program instructions for causing a processor of an image processing apparatus to perform the processing method of the image in the above-described embodiments of FIGS. 11 to 13 .
  • The readable storage medium described above can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.


Abstract

An image processing method, apparatus, device, and user interface system. The method includes: acquiring a first image corresponding to a current scene (201); acquiring a first geographic location corresponding to the first image, and associating the first image with the first geographic location (202); and adding a first mark at the first geographic location on a map, to indicate that the first geographic location is associated with the first image (203). This keeps the operation complexity low when the user wants to confirm that a geographic location has been photographed.

Description

图像的处理方法、装置、设备及用户界面系统
本申请要求2016年04月21日递交的申请号为201610251255.4、发明名称为“图像的处理方法、装置、设备及用户界面系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及互联网技术,尤其涉及一种图像的处理方法、装置、设备及用户界面系统。
背景技术
行车记录仪是记录车辆行驶途中影像及声音等相关信息的仪器,它不仅能够为交通事故提供证据,还能够记录用户旅途中的风景等。因此,越来越多的用户在车辆行驶过程中使用行车记录仪。
在现有技术中,行车记录仪一般采用循环拍摄和/或间隔拍摄的方式进行拍摄,因此,用户并不知道行车记录仪对哪些地理位置进行了拍摄并存储。当用户在路途中的一地理位置发现了趣事时,用于需要获知行车记录仪是否对该地理位置进行了拍摄,用户需要对行车记录仪所存储的图像进行画面回播,然后用户通过浏览回播的画面来确定行车记录仪是否在该地理位置进行了拍摄。具体地,用户需要先开启行车记录仪,然后启用行车记录仪的回播功能,并对回播的画面进行快进操作或后退操作,以确定行车记录仪是否对该地理位置进行了拍摄。
然而,用户在获知行车记录仪是否对一地理位置进行了拍摄时,需要对行车记录仪进行繁琐的操作,导致用户操作复杂度较高。
发明内容
本申请提供一种图像的处理方法、装置、设备及用户界面系统,以解决用户在确知一地理位置被拍摄时,用户操作复杂度高的问题。
第一方面,本申请提供一种图像的处理方法,包括:
获取当前场景对应的第一图像;
获取所述第一图像对应的第一地理位置,并将所述第一图像与所述第一地理位置进行关联;
在地图的所述第一地理位置上添加第一标记,用于表征所述第一地理位置关联有所述第一图像。
作为一种可实现的方式,所述获取当前场景对应的第一图像之后,还包括:
在显示屏所显示的地图界面的预设位置处显示至少一个第一图像。
所述在地图的所述第一地理位置上添加第一标记之后,还包括:
在显示屏所显示的地图界面的所述第一地理位置上显示所述第一标记。
作为一种可实现的方式,在显示屏所显示的地图界面的所述第一地理位置上显示所述第一标记之后,还包括:
接收用户操作所述第一标记触发的第一指令;
根据所述第一指令,在所述显示屏所显示的地图界面上显示一内容显示界面,所述内容显示界面中显示有与所述第一地理位置关联的至少一个第一图像。
本申请通过在预设位置处显示第一图像,或者在收到第一指令后,在内容显示界面上显示第一图像,用户不需要进行繁琐的操作,就可以快速查看第一地理位置关联的至少一个第一图像。进一步地,由于用户可以及时查看第一图像,那么,对于用户不需要的第一图像,用户还可以进行删除操作,从而可以节省存储空间。
作为一种可实现的方式,在所述显示屏所显示的地图界面上显示一内容显示界面之后,还包括:
接收所述用户操作所述内容显示界面触发的第二指令;
根据所述第二指令,将所述第一地理位置以及与所述第一地理位置关联的至少一个第一图像发送给网络设备,以使所述网络设备进行至少一个第一图像的分享。
本申请通过接收用户操作内容显示界面触发的第二指令,即用户只需要与显示屏进行简单的互动,就可以将第一图像分享给其它用户,降低了用户操作的复杂度。
作为一种可实现的方式,所述将所述第一地理位置以及与所述第一地理位置关联的至少一个第一图像发送给网络设备之后,还包括:
将所述第一标记替换为第二标记,所述第二标记用于表征所述第一图像已分享。
作为一种可实现的方式,所述方法还包括:
接收用户操作预设用户接口触发的第三指令;
根据所述第三指令,将已获取的满足预设分享条件的第一图像与各所述第一图像关联的第一地理位置发送给网络设备,以使所述网络设备进行各所述第一图像的分享。
作为一种可实现的方式,所述满足预设分享条件的第一图像,包括:
显示屏所显示的地图界面上已显示的第一标记所关联的至少一个第一图像;或者
拍摄位置位于第一地理范围的至少一个第一图像。
作为一种可实现的方式,所述方法还包括:
接收服务器发送的第二图像和与所述第二图像关联的第二地理位置;
在所述地图的所述第二地理位置上添加第三标记,用于表征所述第二地理位置关联有所述第二图像。
作为一种可实现的方式,所述接收服务器发送的第二图像和与所述第二图像关联的第二地理位置之前,还包括:
向所述服务器上报地理位置信息,所述地理位置信息包括:第二地理位置;
对应地,所述第二图像是其它终端设备在所述第二地理位置拍摄的图像;
或者
向所述服务器上报地理位置信息,所述地理位置信息包括:第三地理位置,所述第三地理位置用于使所述服务器确定第三地理位置对应的第二地理范围;
对应地,所述第二图像是其它终端设备在所述第三地理位置对应的第二地理范围内拍摄的图像,所述第二地理位置是所述第二图像的拍摄位置;
或者
向所述服务器上报地理位置信息,所述地理位置信息包括:第二地理范围;
对应地,所述第二图像是其它终端设备在所述第二地理范围内拍摄的图像,所述第二地理位置是所述第二图像的拍摄位置
作为一种可实现的方式,在所述地图的所述第二地理位置上添加第三标记之后,还包括:
接收用户操作已显示的所述第三标记触发的第四指令;
根据所述第四指令,在所述显示屏上显示一内容显示界面,所述内容显示界面中显示有与所述第二地理位置关联的至少一个第二图像。
本申请通过接收服务器发送的第二图像和与第二图像关联的第二地理位置;在地图的第二地理位置上添加第三标记,用于表征第二地理位置关联有第二图像,使得用户可以获知第二地理位置有哪些有趣的地方,为用户的出行和旅游提供了参考,使得用户可以在一次出行中,不错过旅途中风景和趣事等。
作为一种可实现的方式,所述获取当前场景对应的第一图像之前,还包括:
确定需要启动摄像设备的拍摄功能;
向所述摄像设备发送第五指令以指示所述摄像设备进行拍摄;
所述获取当前场景对应的第一图像,包括:
获取所述摄像设备对当前场景进行拍摄得到的第一图像。
作为一种可实现的方式,所述确定需要启动所述摄像设备的拍摄功能,包括:
接收传感设备发送的传感数据,根据所述传感数据确定需要启动所述摄像设备的拍摄功能;或者
接收服务器发送的拍摄信息,所述拍摄信息包括待拍摄的地理位置,在交通工具所处的地理位置为所述待拍摄的地理位置时,确定需要启动所述摄像设备的拍摄功能;或者
接收用户输入的语音信号,根据所述语音信号确定需要启动所述摄像设备的拍摄功能;或者
接收第三图像,对所述第三图像进行图像分析,根据图像分析结果确定当前场景为预设场景,则确定需要启动摄像设备的拍摄功能;或者
接收用户通过硬件设备触发的控制信号,根据所述控制信号确定需要启动所述摄像设备的拍摄功能。
本申请通过在传感数据异常时,确定需要启动摄像设备的拍摄功能,可以保证在交通工具受到碰撞或剧烈震动时,摄像设备可以及时记录当前场景,为交通工具遇到的交通事故等事件提供证据。
本实施例通过语音的方式确定启动摄像设备的拍摄功能,用户在获取自己所需的图像时,不需要用手进行操作,解放了用户的双手,使得用户可以专心开车,提高了驾驶的安全性。
本申请通过从服务器获取拍摄位置,或者通过图像分析获取拍摄位置,可以主动为用户记录沿途风景,不干扰车主的注意力,用户不需要对沿途风景执行拍摄操作,就可以获取沿途有价值的瞬间和风景。
第二方面,本申请提供一种图像的处理方法,包括:
接收终端设备发送的第一地理位置;
确定第一图像,所述第一图像是其它终端设备在所述第一地理位置对应的第一地理范围内拍摄的图像;
向所述终端设备发送所述第一图像以及与所述第一图像关联的第二地理位置,所述第二地理位置为所述第一图像的拍摄位置。
本申请通过确定其它终端设备在第一地理位置对应的第一地理范围内拍摄的第一图像,并将第一图像以及该第一图像的拍摄位置第二地理位置发送给终端设备,使得使用该终端设备的用户可以方便快捷的获知当前地理位置周围用户分享的第一图像,以及第一图像关联的第二地理位置。同时该第一图像和第二地理位置可以及时为用户的出行或游玩提供参考,提高了用户出行的趣味性。
作为一种可实现的方式,所述确定第一图像之前,还包括:
接收其它终端设备发送的所述第一图像以及与所述第一图像关联的第二地理位置;
将所述第一图像与所述第二地理位置进行关联。
作为一种可实现的方式,所述方法还包括:
在所述第一地理位置对应的第二地理范围内,确定满足预设拍摄条件的第三地理位置;
向所述终端设备发送拍摄信息,所述拍摄信息包括所述第三地理位置,所述拍摄信息用于指示当交通工具位于所述第三地理位置时,需要启动摄像设备的拍摄功能。
本实施例通过在第一地理位置对应的第二地理范围内,确定满足预设拍摄条件的第三地理位置,并向终端设备发送具有拍摄价值的包括该第三地理位置的拍摄信息,使得终端设备可以实现自动拍摄,不错过美好瞬间。
作为一种可实现的方式,在所述第一地理位置对应的第二地理范围内,确定满足预设拍摄条件的第三地理位置之前,还包括:
接收其它终端设备发送的第二图像以及与所述第二图像关联的第四地理位置,所述第二图像为其它终端设备在所述第一地理位置对应的第二地理范围内拍摄的图像,所述第四地理位置为所述第二图像的拍摄位置;
在所述第一地理位置对应的第二地理范围内,确定满足预设拍摄条件的第三地理位置,包括:
根据其它终端设备发送的所述第二图像以及与所述第二图像关联的第四地理位置,确定各所述第四地理位置的被拍摄信息;
根据各所述第四地理位置的被拍摄信息,在所述第四地理位置中确定满足预设拍摄条件的第三地理位置。
作为一种可实现的方式,所述被拍摄信息包括:所述被拍摄信息包括:各所述第四地理位置在预设时间段内被拍摄的频次;
所述预设拍摄条件为被拍摄的频率大于预设值,或者被拍摄的频率在预设排名之前。
本实施例通过在第一地理位置对应的第二地理范围内,对其它终端设备拍摄的第二图像以及关联的第四地理位置进行分析,提取被拍摄数量多的第三地理位置,可以获取比较有价值的拍摄位置,并向终端设备发送具有拍摄价值的包括该第三地理位置的拍摄信息,使得终端设备可以实现自动拍摄,不错过美好瞬间。
第三方面,本申请提供一种图像的处理装置,该装置的功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。具体地,该装置包括:
输入模块,用于获取当前场景对应的第一图像;
关联模块,用于获取所述第一图像对应的第一地理位置,并将所述第一图像与所述第一地理位置进行关联;
标记模块,用于在地图的所述第一地理位置上添加第一标记,用于表征所述第一地理位置关联有所述第一图像。
第四方面,本申请提供一种图像的处理装置,该装置的功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。具体地,该装置包括:
输入模块,用于接收终端设备发送的第一地理位置;
处理模块,用于确定第一图像,所述第一图像是其它终端设备在所述第一地理位置对应的第一地理范围内拍摄的图像;
输出模块,用于向所述终端设备发送所述第一图像以及与所述第一图像关联的第二地理位置,所述第二地理位置为所述第一图像的拍摄位置。
第五方面,本申请提供一种图像的处理设备,包括:输入设备和处理器;
所述输入设备,用于获取当前场景对应的第一图像;
所述处理器,耦合到所述输入设备,用于获取所述第一图像对应的第一地理位置,并将所述第一图像与所述第一地理位置进行关联;
所述处理器,还用于在地图的所述第一地理位置上添加第一标记,用于表征所述第一地理位置关联有所述第一图像。
第六方面,本申请提供一种图像的处理设备,包括:输入设备、处理器和输出设备;
所述输入设备,用于接收终端设备发送的第一地理位置;
所述处理器,耦合到所述输入设备和所述输出设备,用于确定第一图像,所述第一图像是其它终端设备在所述第一地理位置对应的第一地理范围内拍摄的图像;
所述输出设备,用于向所述终端设备发送所述第一图像以及与所述第一图像关联的第二地理位置,所述第二地理位置为所述第一图像的拍摄位置。
第七方面,本申请提供一种用于交通工具的图像处理的设备,包括:机载输入设备和机载处理器;
所述机载输入设备,用于获取当前场景对应的第一图像;
所述机载处理器,耦合到所述机载输入设备,用于获取所述第一图像对应的第一地理位置,并将所述第一图像与所述第一地理位置进行关联;
所述机载处理器,还用于在地图的所述第一地理位置上添加第一标记,用于表征所述第一地理位置关联有所述第一图像。
第八方面,本申请提供一种用户界面系统,包括:
显示组件,用于显示地图界面;
处理器,用于触发所述显示组件在地图界面的第一地理位置上显示第一标记,用于表征第一地理位置关联有所述第一图像。
第九方面,本申请提供一种车载互联网操作系统,包括:
图像控制单元,控制车载输入设备获取当前场景对应的第一图像;
关联控制单元,获取所述第一图像对应的第一地理位置,得到在所述第一地理位置上添加有第一标记的地图,用于表征第一地理位置关联有第一图像,其中,所述添加有第一标记的地图是在原始地图的第一地理位置上添加第一标记得到的。
本申请提供的图像的处理方法、装置、设备及用户界面系统,在获取到当前场景对应的第一图像后,获取第一图像对应的第一地理位置,即确定了该第一图像的具体拍摄位置。然后将第一图像与第一地理位置进行关联,以便用户后续在查看该第一地理位置时,可以快速查看该第一地理位置关联的第一图像。在第一图像和第一地理位置建立关联关系后,在地图的第一地理位置上添加第一标记,由于该第一地理位置关联有当前场景对应的第一图像,使得用户可以通过观看该第一标记来直接获知摄像设备对第一地理位置进行了拍摄,避免了用户进行繁琐操作,降低了用户操作的复杂度。而且,由于本申请提供了具体的拍摄位置,因此不需要用户根据图像中的建筑物或道路进行识别来确定具体的拍摄位置,提高了用户确定该第一地理位置的效率。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有 技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本申请的一种可选的组网方式的示意图;
图2为本申请一实施例提供的图像的处理方法的流程示意图;
图3为本申请一实施例提供的用户界面状态示意图;
图4为本申请一实施例提供的用户界面状态示意图;
图5为本申请一实施例提供的用户界面状态示意图;
图6为本申请一实施例提供的用户界面状态变化示意图;
图7为本申请一实施例提供的用户界面状态变化示意图;
图8为本申请一实施例提供的用户界面状态变化示意图;
图9为本申请一实施例提供的用户界面状态示意图;
图10为本申请一实施例提供的用户界面状态变化示意图;
图11为本申请一实施例提供的图像的处理方法的信令流程示意图;
图12为本申请一实施例提供的图像的处理方法的信令流程图;
图13为本申请一实施例提供的图像的处理方法的信令流程图;
图14为本申请一实施例提供的图像的处理装置结构示意图;
图15为本申请一实施例提供的图像的处理装置结构示意图;
图16为本申请一实施例提供的图像的处理装置结构示意图;
图17为本申请一实施例提供的图像的处理装置结构示意图;
图18为本申请一实施例提供的图像的处理设备的硬件结构示意图;
图19为本申请一实施例提供的图像的处理设备的硬件结构示意图;
图20为本申请一实施例提供的图像的处理设备的硬件结构示意图;
图21为本申请一实施例提供的图像的处理设备的硬件结构示意图;
图22为本申请一实施例提供的用户界面系统的结构示意;
图23为本申请一实施例提供的车载互联网操作系统的结构示意图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施 例中所描述的实施方式并不代表与本发明相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本发明的一些方面相一致的装置和方法的例子。
本申请提供一种对图像进行处理的方法,该方法可以应用到交通工具行驶领域,本申请实施例所涉及的交通工具,可以是任意的车辆、还可以是其他具有相应的控制功能的交通工具。以车辆为例,该车辆可以为单一的油路车辆、还可以是单一的汽路车辆、还可以是油汽结合的车辆、还可以是助力的电动车辆,本申请实施例对车辆的类型并不做限定,该车辆具有相应的车载系统。
具体地,在交通工具行驶过程中,可控制摄像设备及时进行拍摄,并获取摄像设备拍摄的图像。在获取到该图像后,将该图像以及该图像被拍摄的地理位置进行关联存储。然后在地图界面上的该地理位置上添加标记,使得用户可以不进行任何操作,即可直观快速的通过该标记获知摄像设备对该地理位置进行了拍摄。可选地,显示屏所显示的地图界面上还可以显示与该地理位置关联的图像,以使用户可以观看摄像设备在该地理位置拍摄的图像。进一步地,在用户观看该图像后,想要将该图像分享给其它用户时,还可获取用户操作显示屏触发的指令,根据该指令将该图像发送给网络设备,从而使得该图像可以快速便捷的分享给其它用户。
本实施例的执行主体可以为图像的处理装置,该装置可以通过硬件实现,也可以通过硬件执行相应的软件实现,还可以被实现在交通工具或终端设备的基础架构中。当该装置执行如下图2至图10所示的实施例时,该装置可以实现在终端设备的基础架构中,该终端设备包括移动终端、车载设备等。该移动终端例如可以是手机、平板、该车载设备例如可以是行车记录仪、车机、中控台、导航设备等。当该装置执行如下图11至图13所示的实施例时,该装置可以被实现在服务器的基础架构中。
下面以交通工具为车辆,该图像的处理装置被实现到车机或服务器为例,即以车机或服务器为执行主体,对本申请的组网方式以及具体实现方式进行详细说明。
图1为本申请的一种可选的组网方式的示意图。本申请提供的对图像进行处理的方法可通过该组网实现。如图1所示,该组网包括:车机101、摄像设备102以及定位模块103。
其中,车机101指的是安装在车里面的车载信息娱乐产品。车机101大多安装在车的中控台里面,车机101的主机可以和显示屏一体设置,也可以与显示屏分离设置。车机101在功能上能够实现人与车,车与外界的信息通讯。车机的显示屏可以显示导航路径、行驶路径等。
摄像设备102可以为设置在车的任意位置的摄像头,也可以为行车记录仪,也可以为具有摄像功能的手机、平板等终端设备,即该摄像设备102为任意的具有摄像功能的设备。该摄像设备102在接收到车机101发送的用于指示拍摄的指令后,对当前场景进行拍摄,并将拍摄得到的第一图像发送给车机101。
定位模块103可以为全球定位系统(Global Positioning System,简称GPS),也可以为北斗卫星导航系统(BeiDou Navigation Satellite System,简称BDS)等。该定位模块103可以为车辆自带的,也可以由其它外部设备来提供。该定位模块103用于向车机101提供位置信息。
车机101在获取到摄像设备102拍摄的第一图像后,从定位模块103获取当前场景所对应的第一地理位置。该车机101结合该第一地理位置,对该第一图像进行处理。例如,将第一图像与第一地理位置进行关联,然后在地图的第一地理位置上添加第一标记,用于表征第一地理位置关联有第一图像。对于车机101对第一图像进行的其它处理过程,在下述实施例中将进行详细说明。
图1所示的组网中还可以进一步包括传感设备104。传感设备104可以实时向车机101发送传感数据。车机101可以根据传感数据确定需要启动摄像设备的拍摄功能。本领域技术人员可以理解,上述车机启动拍摄功能的方式仅仅是车机101在确定需要启动摄像设备的拍摄功能时的一种可行的实现方式,对于其它可行的实现方式,在下述实施例中将进行详细说明。
在上述实施例的基础上,图1所示的组网中还可以进一步包括服务器105。该服务器105可以接收车机101发送的第一图像,并将该第一图像分享给其它用户。同理,该服务器105还可以将其它用户分享的第二图像发送给车机101。
本领域技术人员可以理解,该组网仅为示例性的组网。该组网中的实体设备还可以由其他设备来替代。例如,该传感设备104还可以为其它检测设备,只要该检测设备能够向车机101发送可以表征车辆受到碰撞或车辆猛烈震动的数据即可。对于该组网的可能的实现方式,本实施例此处不再赘述。下面以图1所示的组网为例,对本申请提供的图像的处理方法进行详细说明。
图2为本申请一实施例提供的图像的处理方法的流程示意图。如图2所示,该流程包括:
步骤201、获取当前场景对应的第一图像。
车机获取摄像设备对当前场景进行拍摄得到的第一图像。该摄像设备的实现方式可 参照图1所示实施例。该第一图像包括照片和/或视频。可选地,用户可预先设置该第一图像所包括的具体内容,例如,用户预设先设置摄像设备同时拍摄照片和视频。
在一个可行的实施例中,车机可以按照预设周期获取该第一图像。具体地,车机可以获取用户输入的预设周期,然后车机向摄像设备发送该预设周期,摄像设备按照该预设周期向车机发送在一个周期内对当前场景进行拍摄的第一图像。例如,预设周期为5分钟,则摄像设备可以每5分钟向车机发送过去5分钟内摄像设备拍摄的第一图像。
In another feasible embodiment, after determining that the shooting function of the camera device needs to be started, the head unit sends a fifth instruction to the camera device to instruct the camera device to shoot. The fifth instruction is used to instruct the camera device to shoot, and is referred to as a shooting instruction in the following embodiments. The head unit then receives the first image obtained by the camera device shooting the current scene. Specifically, the head unit sends a shooting instruction to the camera device, where the shooting instruction includes a shooting mode, for example, shooting a photo, shooting a video, or shooting both a photo and a video. When the shooting mode is shooting a video, the shooting instruction may further include a shooting duration. Optionally, the shooting duration may also be preset. The camera device shoots the current scene according to the shooting instruction and sends the first image obtained by shooting to the head unit. A person skilled in the art can understand that the shooting function in this embodiment may specifically be a snapshot function of the camera device; correspondingly, the fifth instruction is used to instruct the camera device to take a snapshot, and the camera device takes a snapshot of the current scene after receiving the fifth instruction.
It should be noted that, when the head unit has not sent a shooting instruction to the camera device, the camera device may perform regular shooting in a mode set by itself and store the first image according to its own settings.
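To make the shooting-instruction exchange above concrete, the following is a minimal sketch in Python; the message fields, default duration, and the function names are illustrative assumptions, not part of the disclosed protocol.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ShootingMode(Enum):
    PHOTO = "photo"
    VIDEO = "video"
    PHOTO_AND_VIDEO = "photo_and_video"

@dataclass
class ShootingInstruction:
    """The 'fifth instruction' sent from the head unit to the camera device."""
    mode: ShootingMode
    duration_s: Optional[int] = None  # only meaningful when video is shot

def build_instruction(mode: ShootingMode, duration_s: Optional[int] = None) -> ShootingInstruction:
    # When shooting video, the instruction may carry a shooting duration;
    # the duration may also be preset, in which case a default is used here.
    if mode in (ShootingMode.VIDEO, ShootingMode.PHOTO_AND_VIDEO) and duration_s is None:
        duration_s = 30  # assumed preset default, for illustration only
    return ShootingInstruction(mode=mode, duration_s=duration_s)
```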
In this embodiment, determining that the shooting function of the camera device needs to be started includes the following feasible implementations.
In one feasible implementation, sensing data sent by a sensing device is received, and it is determined according to the sensing data that the shooting function of the camera device needs to be started.
Specifically, the sensing device may be an acceleration sensor, a gravity sensor, or the like. Optionally, the sensing device may be a sensing device built into the vehicle; compared with vehicle sensing devices arranged in other ways, a built-in sensing device has higher sensitivity and can improve the authenticity of detected events.
The head unit can acquire in real time the sensing data sent by the acceleration sensor or the gravity sensor and monitor the sensing data. When the head unit finds that the sensing data is abnormal, it determines that an emergency has occurred to the vehicle, and at this time determines that the shooting function of the camera device needs to be started.
In this embodiment, it is determined that the shooting function of the camera device needs to be started when the sensing data is abnormal, which ensures that the camera device can record the current scene in time when the means of transportation is collided with or vibrates violently, providing evidence for events such as traffic accidents.
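As an illustration of the sensing-data check above, the sketch below flags an abnormal acceleration sample by comparing its magnitude against a threshold; the threshold value and the callback wiring are assumptions made for illustration only.

```python
import math

ACCEL_THRESHOLD_MS2 = 25.0  # assumed anomaly threshold (~2.5 g); tuned per vehicle

def is_abnormal(sample):
    """Return True when the acceleration magnitude suggests a collision or violent vibration."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az) > ACCEL_THRESHOLD_MS2

def on_sensor_data(sample, start_shooting):
    # The head unit monitors each sample in real time; on an anomaly it
    # determines that the camera device's shooting function must be started.
    if is_abnormal(sample):
        start_shooting()
```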
In yet another feasible implementation, a voice signal input by a user is received, and it is determined according to the voice signal that the shooting function of the camera device needs to be started.
Specifically, starting the shooting function of the camera device can also be triggered by the user. For example, if the voice wake-up word is "斑马" ("zebra"), when the head unit receives the voice signal "斑马快拍" ("zebra, quick shot") input by the user, it determines, according to the voice signal, that the shooting function of the camera device needs to be started.
In this embodiment, the shooting function of the camera device is started by voice. When acquiring the image the user wants, the user does not need to operate with hands, which frees the user's hands, allows the user to concentrate on driving, and improves driving safety.
In another feasible implementation, the head unit receives shooting information sent by a server, where the shooting information includes a geographic location to be shot; when the geographic location of the means of transportation is the geographic location to be shot, it is determined that the shooting function of the camera device needs to be started.
Specifically, during interaction between the head unit and the server, the head unit can send the geographic location of the vehicle to the server in real time, and the server determines whether a geographic location to be shot exists near the geographic location of the vehicle, and if so, sends the location to be shot to the head unit. For example, the server can obtain scenic geographic locations through the Internet; when a scenic geographic location exists near the geographic location of the vehicle, the server determines that a geographic location to be shot (the scenic geographic location) exists nearby and sends shooting information including the geographic location to be shot to the head unit. The head unit acquires the geographic location of the vehicle in real time, and when the geographic location of the vehicle is the geographic location to be shot, determines that the shooting function of the camera device needs to be started.
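In practice, "the vehicle's location is the location to be shot" can be interpreted as a distance check against the server-provided location, sketched below with the haversine formula; the 100 m tolerance is an assumed value, not specified by the disclosure.

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def at_location_to_shoot(current, to_shoot, tolerance_m=100.0):
    # GPS fixes never match a target exactly, so "is the location to be
    # shot" is read as "within an assumed tolerance of it".
    return haversine_m(*current, *to_shoot) <= tolerance_m
```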
In still another feasible implementation, a third image sent by the camera device is received, image analysis is performed on the third image, and if it is determined according to the image analysis result that the current scene is a preset scene, it is determined that the shooting function of the camera device needs to be started.
Specifically, there is no restriction on the manner in which the camera device sends the third image to the head unit. After acquiring the third image, the head unit performs color analysis on the third image to obtain color information of the third image, where the color information includes the types of colors and the area ratio of each color; it is determined according to the color information whether the current scene is a preset scene, and if so, it is determined that the shooting function of the camera device needs to be started. For example, the color types of the third image include red, green, yellow, brown, and gray: red occupies 25% of the area, green 30%, yellow 25%, brown 10%, and gray 10%. According to this color information, the head unit determines that the current scene is a scenery scene, which is a preset scene, and thus determines that the shooting function of the camera device needs to be started. The preset scene in this embodiment is not limited to a scenery scene; it may also be an architectural scene with certain architectural features, etc. The specific implementation of the preset scene is not particularly limited here.
In the foregoing two feasible implementations, by obtaining the shooting location from the server or through image analysis, scenery along the way can be recorded proactively for the user without distracting the driver; the user can capture valuable moments and scenery along the way without performing any shooting operation.
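The color-analysis check can be illustrated as follows: count coarse color bins over the pixels and compare the area ratios against a scenery profile. The binning rule and the 50% green-plus-yellow criterion are assumptions; the disclosure only requires that color types and area ratios decide whether the scene is a preset scene.

```python
from collections import Counter

def coarse_color(rgb):
    """Map an (r, g, b) pixel to a coarse color bin (very rough, illustration only)."""
    r, g, b = rgb
    if g > r and g > b:
        return "green"
    if r > 150 and g > 150 and b < 100:
        return "yellow"
    if r > max(g, b) + 30:
        return "red"
    if abs(r - g) < 20 and abs(g - b) < 20:
        return "gray"
    return "brown"

def is_scenery(pixels):
    """pixels: iterable of (r, g, b) tuples. True for an assumed 'scenery' profile."""
    counts = Counter(coarse_color(p) for p in pixels)
    total = sum(counts.values())
    if total == 0:
        return False
    ratios = {c: n / total for c, n in counts.items()}
    # Assumed rule: scenery when natural colors (green + yellow) dominate.
    return ratios.get("green", 0) + ratios.get("yellow", 0) >= 0.5
```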
In yet another feasible implementation, a control signal triggered by a user through a hardware device is received, and it is determined according to the control signal that the shooting function of the camera device needs to be started.
Specifically, the hardware device may be the steering wheel, a touch screen, the center console, or the like of the means of transportation. The user operates these hardware devices to trigger the control signal. Taking the steering wheel as an example, the head unit and the steering wheel may be connected by wire or wirelessly. A preset button is provided on the steering wheel; when the user presses the preset button, the head unit receives the control signal triggered by the user through the steering wheel, and then determines according to the control signal that the shooting function of the camera device needs to be started. While the means of transportation is traveling, the user must keep operating the steering wheel; therefore, triggering the control signal through the steering wheel is convenient and quick for the user.
Step 202: Acquire a first geographic location corresponding to the first image, and associate the first image with the first geographic location.
After acquiring the first image shot of the current scene, the head unit acquires the first geographic location corresponding to the first image, that is, the shooting position of the first image. For example, the head unit sends a location acquisition request to the positioning module provided in the vehicle and receives the first geographic location corresponding to the first image returned by the positioning module. Alternatively, the positioning module provides the first geographic location of the current scene to the head unit in real time, the head unit displays the traveling path on the display screen in real time through the first geographic location, and the head unit can acquire the first geographic location corresponding to the first image according to the current traveling path. A person skilled in the art can understand that the head unit may also acquire the first geographic location corresponding to the first image in other ways; for example, the head unit may interact with other terminal devices to acquire it. This embodiment does not particularly limit the specific manner of acquiring the first geographic location.
After acquiring the first geographic location, the head unit associates the first image with the first geographic location. In a specific implementation, the first image and the first geographic location may be stored in association in memory and a mapping relationship between them established; alternatively, when storing the first image, a location attribute including the first geographic location may be added to the attributes of the first image. The above merely illustrates implementations of associating the first image with the first geographic location; other implementations are not particularly limited here. By associating the first image with the first geographic location, the user can subsequently quickly view the first image shot at the first geographic location.
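Both association strategies described above can be sketched as follows: an explicit mapping kept in memory, and a location attribute attached to the stored image record. The type and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

GeoLocation = Tuple[float, float]  # (latitude, longitude)

@dataclass
class ImageRecord:
    path: str
    # Strategy 2: a location attribute added to the image's own attributes.
    location: Optional[GeoLocation] = None

class ImageLocationIndex:
    """Strategy 1: an explicit mapping from geographic location to images."""
    def __init__(self):
        self._by_location: Dict[GeoLocation, List[ImageRecord]] = {}

    def associate(self, image: ImageRecord, location: GeoLocation) -> None:
        image.location = location
        self._by_location.setdefault(location, []).append(image)

    def images_at(self, location: GeoLocation) -> List[ImageRecord]:
        # Lets the user quickly view the first images shot at a location.
        return self._by_location.get(location, [])
```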
Step 203: Add a first marker at the first geographic location on the map, to indicate that the first geographic location is associated with the first image.
After acquiring the first geographic location, the head unit adds the first marker at the first geographic location on the map. Since the first image and the first geographic location are associated, the operation of adding the first marker at the first geographic location can indicate that the first geographic location is associated with the first image.
A person skilled in the art can understand that, after the first marker is added at the first geographic location on the map, the first marker may be displayed directly on the map interface on the display screen, or may be displayed when the user browses the map interface.
FIG. 3 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in FIG. 3, displaying the first marker at the first geographic location on the map interface displayed on the display screen marks the first geographic location. The first marker can indicate that the first geographic location has been photographed, or that the first geographic location is associated with the first image, etc. Further, in the following embodiments, the first marker can also be used to display the first image.
Optionally, the traveling path of the vehicle may also be displayed in real time on the map interface displayed on the display screen. After acquiring the first geographic location, the head unit displays the first marker at the first geographic location on the map interface corresponding to the traveling path of the means of transportation. FIG. 4 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in FIG. 4, the head unit displays the first marker at the first geographic location, near the position of the arrow on the traveling path of the vehicle. A person skilled in the art can understand that the geographic locations marked with the same marker icon at other positions on the traveling path are markers added by the head unit at first geographic locations before this moment. When seeing the first marker, the user learns which geographic locations the camera device has photographed, and also learns that the head unit has stored the first images corresponding to those geographic locations; the user can view those first images at any time later.
According to the image processing method provided by the present application, after the first image corresponding to the current scene is acquired, the first geographic location corresponding to the first image is acquired, that is, the specific shooting position of the first image is determined. The first image is then associated with the first geographic location, so that when the user subsequently views the first geographic location, the user can quickly view the first image associated with it. After the association between the first image and the first geographic location is established, a first marker is added at the first geographic location on the map. Since the first geographic location is associated with the first image corresponding to the current scene, the user can directly learn, by viewing the first marker, that the camera device has photographed the first geographic location, which spares the user tedious operations and reduces the complexity of user operations. Moreover, because the present application provides the specific shooting position, the user does not need to identify buildings or roads in the image to determine the shooting position, which improves the efficiency with which the user determines the first geographic location.
On the basis of the foregoing embodiment, the present application further displays the first image, which can specifically be achieved through the following feasible implementations.
In one feasible implementation, after the first image obtained by the camera device shooting the current scene is acquired, at least one first image is displayed at a preset position of the map interface displayed on the display screen. For the specific implementation process, see FIG. 5.
FIG. 5 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in FIG. 5, the first marker is displayed on the map interface, and at least one first image is also displayed in the lower right corner of the map interface. A person skilled in the art can understand that, in this embodiment, all the first images may be displayed on the map interface, or only some of them, with the total number of first images and the number currently displayed indicated on the map interface. When only some first images are displayed, the user can obtain the other first images by clicking or sliding the display screen, etc. The preset position in this embodiment may be not only the lower right corner but also the lower left corner, upper left corner, upper right corner, etc. Optionally, the preset position may also change with the traveling path of the vehicle, so that the first image does not cover the traveling path.
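A minimal sketch of the partial-display behavior above, producing one page of thumbnails plus a "currently shown / total" label; the page size of 3 is an assumed UI choice, not specified by the disclosure.

```python
def page_of_images(images, page, page_size=3):
    """Return (visible_images, label) for one page of the thumbnail strip."""
    start = page * page_size
    visible = images[start:start + page_size]
    label = f"{min(start + len(visible), len(images))}/{len(images)}"
    return visible, label

# Example: a tap or swipe on the display advances to the next page.
# visible, label = page_of_images(images, page=0)
```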
In another feasible implementation, after the first marker is displayed at the first geographic location on the map interface displayed on the display screen, a first instruction triggered by the user operating the first marker is received; according to the first instruction, a content display interface is displayed on the map interface, and at least one first image associated with the first geographic location is displayed in the content display interface. The first instruction is used to instruct to display a content display interface on the map interface; it may be, for example, a viewing instruction for viewing the first image. For the specific implementation process, see FIG. 6. A person skilled in the art can understand that, in this embodiment, all the first images may be displayed in the content display interface, or only some of them, with the total number of first images and the number currently displayed indicated in the content display interface. When only some first images are displayed, the user can obtain the other first images by clicking or sliding the display screen, etc.
FIG. 6 is a schematic diagram of a user interface state change according to an embodiment of the present application. As shown in FIG. 6, when the user wants to view the first image associated with the first geographic location, the user operates the first marker to trigger the first instruction; the operation may be clicking the first marker, long-pressing it, sliding it, etc. The head unit receives the first instruction triggered by the user operating the first marker, and then displays a content display interface at the middle position of the map interface displayed on the display screen, where at least one first image associated with the first geographic location is displayed. It should be noted that the content display interface may be located in the middle of the map interface or at other positions; this embodiment does not particularly limit its specific position. Optionally, the content display interface may also display the total number of photos and videos included in the shared first images, as well as the first geographic location, the shooting time, etc.
The foregoing two feasible implementations enable the user to quickly view the first image associated with the first geographic location without tedious operations. Further, since the user can view the first image in time, the user can also delete first images that are not needed, thereby saving storage space.
On the basis of the foregoing embodiments, the present application can also interact with a network device to implement a sharing function. The network device may be a mobile network device, such as an in-vehicle device or a mobile terminal, or may be a server, a computer, etc. That is, the first image is shared with other network devices, and at the same time the second image shared by other network devices with the head unit is acquired. The sharing process of the present application is described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
First, how to share the first image with a network device is described, that is, from the perspective of the head unit, how the head unit shares the first image with other network devices.
In one feasible implementation, a second instruction triggered by the user operating the content display interface is received;
according to the second instruction, the first geographic location and at least one first image associated with the first geographic location are sent to the network device, so that the network device shares the at least one first image. The second instruction is used to instruct to send the first geographic location and the at least one first image associated with it to the network device; it may be, for example, a sharing instruction for sharing the first image. A person skilled in the art can understand that there may be multiple first images associated with the first geographic location, and the user can choose, on the content display interface, to send at least one of them to the network device.
Specifically, the head unit can receive the second instruction triggered by the user on the content display interface in multiple ways. For example, referring to the map interface finally displayed in FIG. 6, the user can single-click or double-click any first image, and the head unit receives the second instruction triggered by the user's single-click or double-click on the content display interface. Alternatively, to prevent accidental clicks, when the user single-clicks or double-clicks any first image, a prompt window (not shown) may be displayed on the content display interface with a "Yes"/"No" dialog box; when the user clicks "Yes", the head unit receives the second instruction triggered by the user on the content display interface.
The present application also provides other implementations. For example, FIG. 7 is a schematic diagram of a user interface state change according to an embodiment of the present application. As shown in FIG. 7, in this embodiment, a user interface is also displayed on the content display interface; the user interface may be, for example, a window, an icon, a dialog box, a floating box, a button control, etc. When the user operates the user interface, the second instruction can be triggered. For example, on the content display interface in FIG. 7, a "丢图钉" ("drop a pin") icon is provided; after the user clicks the "drop a pin" icon, the head unit receives the second instruction triggered by the user on the content display interface.
After receiving the user's second instruction, the head unit sends the first geographic location and the first image associated with it to the network device. A person skilled in the art can understand that, after the user clicks "drop a pin", all the first images may be sent to the network device; alternatively, the user first selects at least one first image to be shared, and then, after the user clicks "drop a pin", the selected at least one first image is sent to the network device.
In the present application, by acquiring the second instruction triggered by the user on the content display interface, the user only needs simple interaction with the display screen to share the first image with other users, which reduces the complexity of user operations.
Optionally, in this embodiment, when the first image is shared with other users, the first marker is replaced with a second marker, and the second marker is used to indicate that the first image has been shared.
In another feasible embodiment, a third instruction triggered by the user operating a preset user interface on the display screen is received; according to the third instruction, the acquired first images satisfying a preset sharing condition and the first geographic location associated with each first image are sent to the network device, so that the network device shares each first image. The third instruction is used to instruct to send the first images satisfying the preset sharing condition and the first geographic location associated with each first image to the network device; it may be, for example, a sharing instruction for sharing the first images satisfying the preset condition.
Specifically, the first images satisfying the preset sharing condition include the following feasible implementations.
In one feasible implementation: at least one first image associated with the first markers already displayed on the map interface shown on the display screen.
Specifically, many first markers are displayed on the map interface, and the user browses geographic locations of interest; at this time, only a part of the map interface is shown on the display screen. If the user operates the preset user interface, the third instruction is triggered; after acquiring the third instruction, the head unit shares the at least one first image associated with the first markers displayed on the map interface shown on the display screen. A person skilled in the art can understand that all the first images associated with those first markers, or at least one of them, may be sent to the network device.
In another feasible implementation: at least one first image whose shooting position is within a first geographic range. The first geographic range may be preset by the user or be a system default. The head unit may send all the first images whose shooting positions are within the first geographic range to the network device, or send at least one of them.
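The two sharing conditions above can be sketched as simple filters over the stored image records; the rectangular bounding-box model of the visible map area and the geographic range is an assumption (any geographic-range test would do), and already-shared images are excluded, as noted further below.

```python
def in_box(location, box):
    """box = (min_lat, min_lon, max_lat, max_lon): an assumed rectangular range."""
    lat, lon = location
    min_lat, min_lon, max_lat, max_lon = box
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def images_satisfying_condition(records, *, viewport=None, geo_range=None):
    """Select unshared first images matching either sharing condition."""
    out = []
    for rec in records:
        if rec.get("shared"):
            continue  # already-shared images do not satisfy the condition
        loc = rec["location"]
        if viewport is not None and in_box(loc, viewport):
            out.append(rec)  # condition 1: marker visible on the displayed map
        elif geo_range is not None and in_box(loc, geo_range):
            out.append(rec)  # condition 2: shot within the first geographic range
    return out
```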
FIG. 8 is a schematic diagram of a user interface state change according to an embodiment of the present application. As shown in FIG. 8, a button control is also displayed on the display screen. When the user clicks the button control, the head unit receives the third instruction triggered by the user, and then, according to the third instruction, sends the acquired first images satisfying the preset sharing condition and the first geographic location associated with each first image to the network device.
The foregoing merely illustrates feasible implementations of the preset sharing condition; the preset sharing condition may also be another condition, for example, all acquired first images that have not been shared, or first images whose shooting time satisfies a preset condition, etc.
A person skilled in the art can understand that the first images satisfying the preset sharing condition do not include first images that have already been shared.
Optionally, in this embodiment, when the first images satisfying the preset sharing condition are shared with other users, the first marker is replaced with a second marker, and the second marker is used to indicate that the first image has been shared.
In this embodiment, by sharing the first images satisfying the preset sharing condition with other users, the user can quickly and conveniently share a large batch of first images.
Next, how to acquire the second image shared by other terminal devices is described, that is, from the perspective of the head unit, how the head unit acquires the second image shared by other terminal devices.
In a specific embodiment, a second image sent by the server and a second geographic location associated with the second image are received, and a third marker is added at the second geographic location on the map, to indicate that the second geographic location is associated with the second image.
Optionally, before receiving the second image sent by the server and the second geographic location associated with the second image, the head unit also reports geographic location information to the server. The implementations of reporting geographic location information to the server are as follows:
In one feasible implementation, geographic location information is reported to the server, where the geographic location information includes: the second geographic location; correspondingly, the second image is an image shot by another terminal device at the second geographic location.
Specifically, the second geographic location may be the current geographic location of the vehicle sent by the head unit to the server in real time, or sent according to a preset period, or a geographic location selected by the user and sent by the head unit to the server.
In another feasible implementation, geographic location information is reported to the server, where the geographic location information includes: a third geographic location, the third geographic location being used for the server to determine a second geographic range corresponding to the third geographic location; correspondingly, the second image is an image shot by another terminal device within the second geographic range corresponding to the third geographic location, and the second geographic location is the shooting position of the second image.
Specifically, the third geographic location may be the current geographic location of the vehicle or a geographic location selected by the user. After the head unit sends the third geographic location to the server, the server determines the second geographic range corresponding to it. The second geographic range may be preset by the user or be a head unit default, and may specifically be the third geographic location and its vicinity. For example, the second geographic range may be the area covered by a circle centered on the third geographic location with a preset distance as the radius; it may also be the administrative region where the third geographic location is located. For example, if the third geographic location is on Huaihai East Road in Shanghai, the second geographic range is Huangpu District, Shanghai. The manner of dividing the second geographic range is not particularly limited here. It should be noted that the second geographic location is within the second geographic range, the second geographic location is the shooting position of the second image, and the second geographic location may also be the same geographic location as the third geographic location.
In yet another feasible implementation, geographic location information is reported to the server, where the geographic location information includes: a second geographic range; correspondingly, the second image is an image shot by another terminal device within the second geographic range, and the second geographic location is the shooting position of the second image.
After the second image and the second geographic location are acquired, the second image is associated with the second geographic location; for the specific association manner, refer to the manner of associating the first image with the first geographic location, which is not repeated here.
Then, a third marker is added at the second geographic location on the map, to indicate that the second geographic location is associated with the second image. FIG. 9 is a schematic diagram of a user interface state according to an embodiment of the present application. As shown in FIG. 9, after the third marker is added, the head unit displays the third marker at the second geographic location on the map interface. When the user sees the third marker, the user learns that other users have shared a second image at this geographic location.
Optionally, the user can also view the second image. Specifically, the head unit receives a fourth instruction triggered by the user operating the displayed third marker; according to the fourth instruction, a content display interface is displayed on the display screen, where at least one second image associated with the second geographic location is displayed. The fourth instruction is used to instruct to display a content display interface on the map interface; it may be, for example, a viewing instruction for viewing the second image. The manner in which the user views the second image is similar to that of viewing the first image and is not repeated here; only a specific example is given.
FIG. 10 is a schematic diagram of a user interface state change according to an embodiment of the present application. As shown in FIG. 10, when the user wants to view the second image associated with the second geographic location, the user clicks the third marker, and the head unit receives the fourth instruction triggered by the user clicking the third marker. The head unit then displays a content display interface at the middle position of the map interface displayed on the display screen, where at least one second image associated with the second geographic location is displayed.
In the present application, by marking the second geographic location shared by other users and displaying the second image associated with it, the user can learn what interesting things there are at the second geographic location, which provides a reference for the user's travel and tourism, so that the user does not miss scenery and interesting things along the way in a single trip.
It should be noted that the foregoing marker symbols are merely exemplary and do not limit the present application; other marker symbols can also be applied to the present application.
In the following, taking the image processing apparatus being implemented in the infrastructure of a server as an example, with the server as the execution subject, the interaction between the server and the terminal device to implement the sharing process is described from the perspective of the server. In this embodiment, the terminal device being a head unit and the means of transportation being a vehicle are taken as an example for detailed description.
FIG. 11 is a schematic signaling flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 11, the flow includes:
S11: The head unit sends a first geographic location to the server.
The first geographic location in this embodiment may specifically be the geographic location of the vehicle sent by the head unit to the server in real time, or sent according to a preset period, or a first geographic location of interest to the user sent by the head unit to the server.
S12: The server determines a first image, where the first image is an image shot by another head unit within a first geographic range corresponding to the first geographic location.
For the implementation of the first geographic range, refer to the geographic ranges described in the foregoing embodiments, which are not repeated here. A person skilled in the art can understand that the determined first image may be all images shot by all other head units within the first geographic range corresponding to the first geographic location, or images shot by some other head units within that range, or at least one image shot by at least one head unit within that range.
Optionally, before S12, this embodiment may further include S10A and S10B, which are specifically as follows:
S10A: Another head unit sends the first image and a second geographic location associated with the first image to the server.
It should be noted that the second geographic location is the shooting position of the first image, and the second geographic location is within the first geographic range.
S10B: The server associates the first image with the second geographic location.
S13: The server sends the first image and the second geographic location associated with the first image to the head unit.
The server sends the first image and the second geographic location to the head unit, and the head unit can associate the first image with the second geographic location, where the second geographic location is the shooting position of the first image.
In this embodiment, the server determines that there are first images shot by other head units within the first geographic range corresponding to the first geographic location, and sends the first image and its shooting position, the second geographic location, to the head unit, so that the user of the head unit can conveniently and quickly learn the first images shared by users around the current geographic location, as well as the second geographic locations associated with them. At the same time, the first image and the second geographic location can provide a timely reference for the user's travel or outing, improving the enjoyment of the user's travel.
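Server-side, S12/S13 amount to a spatial lookup over images reported by other head units. The sketch below uses a linear scan with a distance function (for example, the haversine sketch given earlier, passed in as a parameter); a real service would likely replace this with a spatial index, and the radius-based range model is an assumption.

```python
def images_in_range(shared_images, center, radius_m, distance_fn):
    """shared_images: [(image_id, (lat, lon)), ...] reported by other head units.
    Returns each first image together with its shooting position (the
    'second geographic location') for sending back to the requesting head unit."""
    results = []
    for image_id, shot_at in shared_images:
        if distance_fn(*center, *shot_at) <= radius_m:
            results.append((image_id, shot_at))
    return results
```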
FIG. 12 is a signaling flowchart of an image processing method according to an embodiment of the present application. The flow includes:
S21: The head unit sends the first geographic location of the vehicle to the server.
The implementation of S21 is similar to that of S11; for details, refer to the foregoing embodiment, which is not repeated here.
S22: The server determines, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition.
The second geographic range is determined in a manner similar to the first geographic range and is not repeated here. It should be noted that the second geographic range and the first geographic range may be the same or different.
The preset shooting condition may be that the number of times or frequency at which the geographic location is photographed is higher than a preset value, or that the number of comments on the geographic location in Internet social communities exceeds a preset value, or that the geographic location is a national-level scenic area, etc.
The server examines each geographic location within the second geographic range to determine whether there is a third geographic location satisfying the preset shooting condition.
S23: The server sends shooting information to the head unit, where the shooting information includes the third geographic location, and the shooting information is used to indicate that the shooting function of the camera device needs to be started when the vehicle is located at the third geographic location.
In this embodiment, by determining, within the second geographic range corresponding to the first geographic location, a third geographic location satisfying the preset shooting condition, and sending to the terminal device shooting information including that third geographic location of shooting value, the terminal device can implement automatic shooting and not miss beautiful moments.
A specific embodiment is taken below as an example to describe how the present application determines the third geographic location.
FIG. 13 is a signaling flowchart of an image processing method according to an embodiment of the present application. The flow includes:
S31: The head unit sends the first geographic location of the vehicle to the server.
The implementation of S31 is similar to that of S11; for details, refer to the foregoing embodiment, which is not repeated here.
S32: The server determines, according to second images sent by other head units and fourth geographic locations associated with the second images, the photographed information of each fourth geographic location.
It should be noted that the second image is an image shot by another head unit within the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting position of the second image.
Optionally, before S32, S30A and S30B are further included, specifically as follows:
S30A: Another head unit sends the second image and the fourth geographic location associated with the second image to the server.
S30B: The server associates the second image with the fourth geographic location.
S33: The server determines, according to the photographed information of each fourth geographic location, a third geographic location satisfying the preset shooting condition among the fourth geographic locations.
The photographed information may specifically be the frequency or probability at which the fourth geographic location is photographed within a preset time period. The preset shooting condition may specifically be that the shooting frequency or probability is greater than a preset value, or that the photographed frequency or probability ranks before a preset rank, etc.
S34: The server sends shooting information to the head unit, where the shooting information includes the third geographic location, and the shooting information is used to indicate that the shooting function of the camera device needs to be started when the vehicle is located at the third geographic location.
The server examines each geographic location within the second geographic range to determine whether there is a third geographic location satisfying the preset shooting condition.
In this embodiment, by analyzing, within the second geographic range corresponding to the first geographic location, the second images shot by other terminal devices and the associated fourth geographic locations, and extracting the third geographic locations that are photographed many times, relatively valuable shooting positions can be obtained, and shooting information including those third geographic locations is sent to the terminal device, so that the terminal device can implement automatic shooting and not miss beautiful moments.
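The frequency-based condition of S32-S33 can be sketched as counting, per fourth geographic location, how many second images were shot there within the preset time period, then keeping locations whose count exceeds a threshold (or the top-ranked ones). Rounding coordinates to about four decimal places as a way of grouping nearby shooting positions is an assumption for illustration.

```python
from collections import Counter

def third_locations(shot_events, window_start, window_end, min_count=5):
    """shot_events: [(timestamp, (lat, lon)), ...] from other terminal devices.
    Returns fourth geographic locations whose photographed frequency within
    the preset time period satisfies the preset shooting condition."""
    counts = Counter()
    for ts, (lat, lon) in shot_events:
        if window_start <= ts <= window_end:
            cell = (round(lat, 4), round(lon, 4))  # assumed grouping of nearby shots
            counts[cell] += 1
    return [loc for loc, n in counts.most_common() if n >= min_count]
```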
Image processing apparatuses according to one or more embodiments of the present application are described in detail below. These image processing apparatuses may be implemented in the infrastructure of a means of transportation or a terminal device, or in an interaction system of a server and a client. A person skilled in the art can understand that these image processing apparatuses can all be constructed from commercially available hardware components configured through the steps taught in this solution. For example, the processor component (or processing module, processing unit) may use components such as single-chip microcomputers, microcontrollers, and microprocessors from enterprises such as Texas Instruments, Intel, and ARM.
FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 14, the apparatus includes:
an input module 1401, configured to acquire a first image corresponding to a current scene;
an association module 1402, configured to acquire a first geographic location corresponding to the first image and associate the first image with the first geographic location;
a marking module 1403, configured to add a first marker at the first geographic location on a map, to indicate that the first geographic location is associated with the first image.
The image processing apparatus provided in this embodiment can be used to execute the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
FIG. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. This embodiment is implemented on the basis of the embodiment of FIG. 14, specifically as follows:
Optionally, the apparatus further includes: a first display module 1404, configured to display at least one first image at a preset position of the map interface displayed on the display screen.
Optionally, the apparatus further includes: a second display module 1405, configured to display the first marker at the first geographic location on the map interface displayed on the display screen.
Optionally, the input module 1401 is further configured to receive a first instruction triggered by a user operating the first marker;
the second display module 1405 is further configured to display, according to the first instruction, a content display interface on the map interface displayed on the display screen, where at least one first image associated with the first geographic location is displayed in the content display interface.
Optionally, the apparatus further includes: a first output module 1406;
the input module 1401 is further configured to receive a second instruction triggered by the user operating the content display interface;
the first output module 1406 is configured to send, according to the second instruction, the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device shares the at least one first image.
Optionally, the marking module 1403 is further configured to replace the first marker with a second marker, where the second marker is used to indicate that the first image has been shared.
Optionally, the apparatus further includes: a second output module 1407;
the input module 1401 is further configured to receive a third instruction triggered by a user operating a preset user interface;
the second output module 1407 is configured to send, according to the third instruction, the acquired first images satisfying a preset sharing condition and the first geographic location associated with each first image to a network device, so that the network device shares each first image.
Optionally, the input module 1401 is further configured to receive a second image sent by a server and a second geographic location associated with the second image;
the marking module 1403 is further configured to add a third marker at the second geographic location on the map, to indicate that the second geographic location is associated with the second image.
Optionally, the apparatus further includes: a third output module 1408;
the third output module 1408 is configured to:
report geographic location information to the server, where the geographic location information includes: the second geographic location;
correspondingly, the second image is an image shot by another terminal device at the second geographic location;
or
report geographic location information to the server, where the geographic location information includes: a third geographic location, the third geographic location being used for the server to determine a second geographic range corresponding to the third geographic location;
correspondingly, the second image is an image shot by another terminal device within the second geographic range corresponding to the third geographic location, and the second geographic location is the shooting position of the second image;
or
report geographic location information to the server, where the geographic location information includes: a second geographic range;
correspondingly, the second image is an image shot by another terminal device within the second geographic range, and the second geographic location is the shooting position of the second image.
Optionally, the apparatus further includes: a third display module 1409;
the input module 1401 is further configured to receive a fourth instruction triggered by a user operating the displayed third marker;
the third display module 1409 is configured to display, according to the fourth instruction, a content display interface on the display screen, where the second image associated with the second geographic location is displayed in the content display interface.
Optionally, the apparatus further includes: a shooting module 1410 and a fourth output module 1411;
the shooting module 1410 is configured to determine that the shooting function of a camera device needs to be started;
the fourth output module 1411 is configured to send a fifth instruction to the camera device to instruct the camera device to shoot;
the input module 1401 is specifically configured to acquire the first image obtained by the camera device shooting the current scene.
Optionally, the shooting module 1410 is specifically configured to:
receive sensing data sent by a sensing device, and determine according to the sensing data that the shooting function of the camera device needs to be started; or
receive shooting information sent by a server, where the shooting information includes a geographic location to be shot, and determine that the shooting function of the camera device needs to be started when the geographic location of the means of transportation is the geographic location to be shot; or
receive a voice signal input by a user, and determine according to the voice signal that the shooting function of the camera device needs to be started; or
receive a third image, perform image analysis on the third image, and if it is determined according to the image analysis result that the current scene is a preset scene, determine that the shooting function of the camera device needs to be started; or
receive a control signal triggered by a user through a hardware device, and determine according to the control signal that the shooting function of the camera device needs to be started.
The apparatus provided in this embodiment can be used to execute the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
FIG. 16 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 16, the apparatus includes:
an input module 1601, configured to receive a first geographic location sent by a terminal device;
a processing module 1602, configured to determine a first image, where the first image is an image shot by another terminal device within a first geographic range corresponding to the first geographic location;
an output module 1603, configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is the shooting position of the first image.
The image processing apparatus provided in this embodiment can be used to execute the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
FIG. 17 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. This embodiment is implemented on the basis of the embodiment of FIG. 16:
the input module 1601 is further configured to receive the first image sent by another terminal device and the second geographic location associated with the first image;
the processing module 1602 is further configured to associate the first image with the second geographic location.
Optionally, the apparatus further includes: a shooting module 1604;
the shooting module 1604 is configured to determine, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition;
the output module 1603 is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that the shooting function of a camera device needs to be started when the means of transportation is located at the third geographic location.
Optionally, the input module 1601 is further configured to receive second images sent by other terminal devices and fourth geographic locations associated with the second images, where the second image is an image shot by another terminal device within the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting position of the second image;
the shooting module 1604 is specifically configured to determine, according to the second images sent by other terminal devices and the fourth geographic locations associated with them, the photographed information of each fourth geographic location, and to determine, according to the photographed information of each fourth geographic location, a third geographic location satisfying the preset shooting condition among the fourth geographic locations.
The apparatus provided in this embodiment can be used to execute the foregoing method embodiments; its implementation principles and technical effects are similar and are not repeated here.
FIG. 18 is a schematic diagram of a hardware structure of an image processing device according to an embodiment of the present application. As shown in FIG. 18, the device provided in this embodiment includes: an input device 181, a processor 182, an output device 183, a display screen 184, a memory 185, and at least one communication bus 186. The communication bus 186 is used to implement communication connections between the elements. The memory 185 may include a high-speed RAM memory, and may also include a non-volatile memory (NVM), such as at least one disk memory; the memory 185 can store various programs for completing various processing functions and implementing the method steps of this embodiment.
Optionally, the processor 182 may be implemented as, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic element, and the processor 182 is coupled to the input device 181 and the output device 183 through a wired or wireless connection.
Optionally, the input device 181 may include multiple input devices, for example, at least one of a user-facing user interface, a device-facing device interface, a programmable software interface, and a transceiver. Optionally, the device-facing device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface for data transmission between devices (for example, a USB interface or a serial port); optionally, the user-facing user interface may be, for example, user-facing control buttons, a voice input device for receiving voice input, and a touch-sensing device for receiving user touch input (for example, a touch screen with a touch-sensing function, a touchpad, etc.); optionally, the programmable software interface may be, for example, an entry for the user to edit or modify programs, such as an input pin interface or input interface of a chip; optionally, the transceiver may be a radio-frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, etc.
Optionally, the image processing device may be a device for image processing of a means of transportation; for example, it may be a device for image processing of a vehicle, of an aircraft, of a watercraft, etc. Regarding the specific content of the device for image processing of a means of transportation, the present application provides another embodiment for its introduction; see the later embodiment, details of which are not described here.
Optionally, the input device 181 is configured to acquire a first image corresponding to a current scene;
the processor 182, coupled to the input device 181, is configured to acquire a first geographic location corresponding to the first image and associate the first image with the first geographic location;
the processor 182 is further configured to add a first marker at the first geographic location on a map, to indicate that the first geographic location is associated with the first image.
Optionally, the device further includes: a display screen 184, coupled to the processor 182; the processor 182 is further configured to control the display screen 184 to display at least one first image at a preset position of the displayed map interface.
Optionally, the device further includes: a display screen 184, coupled to the processor 182; the processor 182 is further configured to control the display screen 184 to display the first marker at the first geographic location on the displayed map interface.
Optionally, the input device 181 is further configured to receive a first instruction triggered by a user operating the first marker;
the processor 182 is further configured to control, according to the first instruction, the display screen 184 to display a content display interface on the displayed map interface, where at least one first image associated with the first geographic location is displayed in the content display interface.
Optionally, the device further includes: an output device 183, coupled to the processor 182;
the input device 181 is further configured to receive a second instruction triggered by the user operating the content display interface;
the processor 182 is further configured to control, according to the second instruction, the output device 183 to send the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device shares the at least one first image.
Optionally, the processor 182 is further configured to replace the first marker with a second marker, where the second marker is used to indicate that the first image has been shared.
Optionally, the device further includes: an output device 183, coupled to the processor 182;
the input device 181 is further configured to receive a third instruction triggered by a user operating a preset user interface;
the processor 182 is further configured to control, according to the third instruction, the output device 183 to send the acquired first images satisfying a preset sharing condition and the first geographic location associated with each first image to a network device, so that the network device shares each first image.
Optionally, the input device 181 is further configured to receive a second image sent by a server and a second geographic location associated with the second image;
the processor 182 is further configured to add a third marker at the second geographic location on the map, to indicate that the second geographic location is associated with the second image.
Optionally, the device further includes: an output device 183, coupled to the processor 182;
the processor 182 is further configured to determine that the shooting function of a camera device needs to be started;
the output device 183 is configured to send a fifth instruction to the camera device to instruct the camera device to shoot;
the input device 181 is specifically configured to acquire the first image obtained by the camera device shooting the current scene.
Optionally, the input device 181 is further configured to receive sensing data sent by a sensing device, and the processor 182 is further configured to determine according to the sensing data that the shooting function of the camera device needs to be started; or
the input device 181 is further configured to receive shooting information sent by a server, where the shooting information includes a geographic location to be shot, and the processor 182 is further configured to determine that the shooting function of the camera device needs to be started when the geographic location of the means of transportation is the geographic location to be shot; or
the input device 181 is further configured to receive a voice signal input by a user, and the processor 182 is further configured to determine according to the voice signal that the shooting function of the camera device needs to be started; or
the input device 181 is further configured to receive a third image, and the processor 182 is further configured to perform image analysis on the third image and, if it is determined according to the image analysis result that the current scene is a preset scene, determine that the shooting function of the camera device needs to be started; or
the input device 181 is further configured to receive a control signal triggered by a user through a hardware device, and the processor 182 is further configured to determine according to the control signal that the shooting function of the camera device needs to be started.
The device provided in this embodiment can be used to execute the method embodiments described in FIG. 2 to FIG. 10; its implementation principles and technical effects are similar and are not repeated here.
FIG. 19 is a schematic diagram of a hardware structure of an image processing device according to an embodiment of the present application. As shown in FIG. 19, the device may include an input device 191, a processor 192, an output device 193, a memory 194, and at least one communication bus 195. The communication bus 195 is used to implement communication connections between the elements. The memory 194 may include a high-speed RAM memory, and may also include a non-volatile memory (NVM), such as at least one disk memory; the memory 194 can store various programs for completing various processing functions and implementing the method steps of this embodiment.
Optionally, the input device 191 is configured to receive a first geographic location sent by a terminal device;
the processor 192, coupled to the input device 191 and the output device 193, is configured to determine a first image, where the first image is an image shot by another terminal device within a first geographic range corresponding to the first geographic location;
the output device 193 is configured to send the first image and a second geographic location associated with the first image to the terminal device, where the second geographic location is the shooting position of the first image.
Optionally, the processor 192 is further configured to determine, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition;
the output device 193 is further configured to send shooting information to the terminal device, where the shooting information includes the third geographic location, and the shooting information is used to indicate that the shooting function of a camera device needs to be started when the vehicle is located at the third geographic location.
Optionally, the input device 191 is further configured to receive second images sent by other terminal devices and fourth geographic locations associated with the second images, where the second image is an image shot by another terminal device within the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting position of the second image;
the processor 192 is further configured to determine, according to the second images sent by other terminal devices and the fourth geographic locations associated with them, the photographed information of each fourth geographic location, and to determine, according to the photographed information of each fourth geographic location, a third geographic location satisfying the preset shooting condition among the fourth geographic locations.
The device provided in this embodiment can be used to execute the method embodiments described in FIG. 11 to FIG. 13; its implementation principles and technical effects are similar and are not repeated here.
FIG. 20 is a schematic diagram of a hardware structure of an image processing device according to an embodiment of the present application. FIG. 20 is a specific embodiment of the implementation of FIG. 18. The image processing device may be, for example, a terminal device. As shown in FIG. 20, the image processing device of this embodiment includes a processor 11 and a memory 12.
The processor 11 executes the computer program code stored in the memory 12 to implement the image processing methods of FIG. 2 to FIG. 10 in the foregoing embodiments.
Optionally, the processor 11 is disposed in a processing component 10. The image processing device may further include: a communication component 13, a power component 14, a multimedia component 15, an audio component 16, an input/output interface 17, and a sensor component 18.
The processing component 10 generally controls the overall operation of the image processing device. The processing component 10 may include one or more processors 11 to execute instructions, to complete all or some of the steps of the methods of FIG. 2 to FIG. 10. In addition, the processing component 10 may include one or more modules to facilitate interaction between the processing component 10 and other components. For example, the processing component 10 may include a multimedia module to facilitate interaction between the multimedia component 15 and the processing component 10.
The power component 14 provides power for the various components of the image processing device. The power component 14 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the image processing device.
The multimedia component 15 includes a display screen providing an output interface between the image processing device and the user. The display screen can display the map interface of the foregoing embodiments. The display screen includes a touch panel and may be implemented as a touch screen, to receive instructions input by the user operating the user interface. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor can sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe operation.
The audio component 16 is configured to output and/or input audio signals. For example, the audio component 16 includes a microphone (MIC); when the image processing device is in an operating mode, such as a speech recognition mode, the microphone is configured to receive external audio signals, for example, the aforementioned "斑马". The received audio signal may be further stored in the memory 12 or sent via the communication component 13. In some embodiments, the audio component 16 further includes a speaker for outputting audio signals.
The input/output interface 17 provides an interface between the processing component 10 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to, a volume button, a start button, and a lock button.
The sensor component 18 includes one or more sensors for providing state assessments of various aspects of the image processing device. For example, the sensor component 18 can detect the on/off state of the image processing device, the relative positioning of components, and the presence or absence of contact between the user and the image processing device. The sensor component 18 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. In some embodiments, the sensor component 18 may further include an acceleration sensor, a gyroscope sensor, a gravity sensor, etc.
The communication component 13 is configured to facilitate wired or wireless communication between the image processing device and other devices. The image processing device can access a wireless network based on communication standards, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the image processing device may include a SIM card slot for inserting a SIM card, so that the image processing device can log in to a GPRS network and establish communication with the server through the Internet.
In an exemplary embodiment, the image processing device may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the foregoing methods.
On the basis of the foregoing description of the image processing device of the embodiment shown in FIG. 18, the present application further provides another embodiment, which specifically discloses a device for image processing of a means of transportation. Optionally, the device for image processing of a means of transportation may be a head unit device, a device added to the means of transportation after it leaves the factory, and so on.
Specifically, the device for image processing of a means of transportation may include: an onboard input device and an onboard processor; optionally, it may further include an onboard output device and other additional devices.
It should be noted that "onboard" in the "onboard input device", "onboard output device", and "onboard processor" involved in the embodiments of the present application may refer to a "vehicle-mounted input device", "vehicle-mounted output device", and "vehicle-mounted processor" carried on a vehicle, or to an "onboard input device", "onboard output device", and "onboard processor" carried on an aircraft, or to devices carried on other types of means of transportation; the embodiments of the present application do not limit the meaning of "onboard". Taking the means of transportation being a vehicle as an example, the onboard input device may be a vehicle-mounted input device, the onboard processor may be a vehicle-mounted processor, and the onboard output device may be a vehicle-mounted output device.
Depending on the type of means of transportation in which it is installed, the vehicle-mounted input device may include multiple input devices, for example, at least one of a user-facing vehicle-mounted user interface, a device-facing vehicle-mounted device interface, a programmable software interface, and a transceiver. Optionally, the device-facing vehicle-mounted device interface may be a wired interface for data transmission between devices (for example, a connection interface to a driving recorder on the center console of a vehicle), or a hardware plug-in interface for data transmission between devices (for example, a USB interface or a serial port); optionally, the user-facing vehicle-mounted user interface may be, for example, steering wheel control buttons for a vehicle, center console control buttons for a large or small vehicle, a voice input device for receiving voice input (for example, a microphone placed on the steering wheel or the control rudder, a central sound collection device, etc.), and a touch-sensing device for receiving user touch input (for example, a touch screen with a touch-sensing function, a touchpad, etc.); optionally, the programmable software interface may be, for example, an entry in the vehicle control system available for the user to edit or modify, such as the input pin interfaces or input interfaces of the large and small chips involved in the vehicle; optionally, the transceiver may be a radio-frequency transceiver chip with a communication function in the vehicle, a baseband processing chip, a transceiver antenna, etc. According to the methods in the embodiments corresponding to FIG. 2 to FIG. 10 above, the onboard input device is configured to acquire the first image corresponding to the current scene. Correspondingly, when the device for image processing of a means of transportation is a central control unit or another device on a vehicle, the vehicle-mounted input device may be a device transmission interface communicating with the various service sources inside the vehicle, or a transceiver with a communication function. The onboard input device is also configured to receive various instructions triggered by the user. Correspondingly, when the device for image processing of a means of transportation is a central control unit or another device on a vehicle, the vehicle-mounted input device may be steering wheel control buttons for the vehicle, center console control buttons for a large or small vehicle, a voice input device for receiving voice input, a touch-sensing device for receiving user touch input (for example, a touch screen with a touch-sensing function, a touchpad, etc.), and so on.
Depending on the type of means of transportation in which it is installed, the onboard processor may be implemented using various application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), central processing units (CPUs), controllers, microcontrollers, microprocessors, or other electronic elements, and is configured to execute the foregoing methods. The onboard processor is coupled to the onboard input device and the onboard output device through in-vehicle wiring or a wireless connection. The onboard processor can execute the methods in the embodiments corresponding to FIG. 2 to FIG. 10 above.
Depending on the type of means of transportation in which it is installed, the onboard output device may be a transceiver establishing wireless transmission with the user's handheld device or the like, or may be any of various display apparatuses on the means of transportation. The display apparatus may be any display device used in the industry, or a head-up display with a projection function. The onboard output device of this embodiment can execute the methods in the embodiments corresponding to FIG. 2 to FIG. 10 above.
FIG. 21 is a schematic diagram of a hardware structure of an image processing device according to an embodiment of the present application. FIG. 21 is a specific embodiment of the implementation of FIG. 19. The image processing device may be, for example, a server. As shown in FIG. 21, the image processing device provided in this embodiment includes a processor and a memory 22. Optionally, the processor is disposed in a processing component 20.
The processor executes the computer program code stored in the memory 22 to implement the image processing methods shown in FIG. 11 to FIG. 13 in the foregoing embodiments.
Optionally, the image processing device may further include: a power component 23, a network interface 24, and an input/output interface 25.
The processing component 20 further includes one or more processors, and memory resources represented by the memory 22 for storing instructions executable by the processing component 20, such as application programs. The application programs stored in the memory 22 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 20 is configured to execute instructions, to perform the image processing methods in the embodiments of FIG. 11 to FIG. 13 above.
The image processing device may further include a power component 23 configured to perform power management of the image processing device, a wired or wireless network interface 24 configured to connect the image processing device to a network, and an input/output (I/O) interface 25. The image processing device can operate based on an operating system stored in the memory 22, for example, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
FIG. 22 is a schematic structural diagram of a user interface system according to an embodiment of the present application. As shown in FIG. 22, the system includes:
a display component 2201, configured to display a map interface;
a processor 2202, configured to trigger the display component 2201 to display a first marker at a first geographic location on the map interface, to indicate that the first geographic location is associated with a first image.
The schematic diagram of the map interface state provided by this embodiment may specifically be as shown in FIG. 3 above, where the first marker is displayed at the first geographic location on the map interface.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display, at a preset position of the map interface, the first image associated with the first geographic location. Specifically, as shown in FIG. 5 above, the first marker is displayed on the map interface, and the first image is also displayed in the lower right corner of the map interface.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a second marker at a second geographic location on the map interface, where the second marker is used to indicate that the second image associated with the second geographic location has been shared. Specifically, as shown in the user interface diagram on the right side of FIG. 7 above, the second marker is also displayed at the second geographic location on the map interface.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a third marker at a third geographic location on the map interface, to indicate that the third geographic location is associated with a third image shared by other users. Specifically, as shown in FIG. 9 above, the third marker is also displayed at the third geographic location on the map interface.
Optionally, the processor 2202 is further configured to trigger, based on a user operation, the display component 2201 to display a content display interface on the map interface, where images corresponding to each of the markers are displayed on the content display interface. Specifically, see FIG. 6 and FIG. 10 above.
Optionally, the processor 2202 is further configured to trigger the display component 2201 to display a user interface on the content display interface, where the user interface is used for the user to trigger various instructions. Specifically, see for example the "丢图钉" ("drop a pin") icon shown in FIG. 7 above.
The user interface system provided in this embodiment can visually and intuitively show the user, on the map interface, the geographic locations of the shot images and whether there are images shared by other users. Moreover, the user can share an image with other users just by operating the user interface on the map interface; the operation flow is simple, which helps the user quickly share shot images with other users in a driving environment and improves driving safety.
The present application further provides an in-vehicle Internet operating system. A person skilled in the art can understand that the in-vehicle Internet operating system can manage and control the hardware of the image processing device shown in FIG. 18 or FIG. 20 above, or the hardware of the device for image processing of a means of transportation involved in the present application, as well as the computer programs of the software resources involved in the present application; it is system software running directly on the foregoing devices. The operating system is the interface between the user and the foregoing devices, and also the interface between the hardware and other software.
The in-vehicle Internet operating system provided by the present application can interact with other modules or functional devices on the vehicle, to control the functions of the corresponding modules or functional devices.
Specifically, taking the means of transportation in the foregoing embodiments being a vehicle and the image processing device being a vehicle-mounted terminal device as an example, based on the in-vehicle Internet operating system provided by the present application and the development of vehicle communication technology, the vehicle is no longer independent of the communication network: the vehicle can be connected with the server through the vehicle-mounted terminal device to form a network, thereby forming the Internet of Vehicles. The Internet of Vehicles system can provide voice communication services, positioning services, navigation services, mobile Internet access, vehicle emergency rescue, vehicle data and management services, in-vehicle entertainment services, and the like.
The structure of the in-vehicle Internet operating system provided by the present application is described in detail below. FIG. 23 is a schematic structural diagram of an in-vehicle Internet operating system according to an embodiment of the present application. As shown in FIG. 23, the operating system provided by the present application includes: an image control unit 231 and an association control unit 232.
The image control unit 231 controls a vehicle-mounted input device to acquire a first image corresponding to a current scene;
the association control unit 232 acquires a first geographic location corresponding to the first image and obtains a map with a first marker added at the first geographic location, to indicate that the first geographic location is associated with the first image, where the map with the first marker added is obtained by adding the first marker at the first geographic location on an original map.
Specifically, the vehicle-mounted input device in this embodiment may include the input devices of the foregoing embodiments, and the image control unit 231 can control the vehicle-mounted input device to acquire the first image corresponding to the current scene.
Specifically, the association control unit 232 can add the first marker at the first geographic location of the original map through an image processing system. The image processing system may be a function implemented by the operating system, or a function implemented by the processor in the foregoing embodiments.
Further, the in-vehicle Internet operating system can, through the foregoing image control unit 231 and association control unit 232, or on the basis of these two units in combination with other units, control the corresponding components to execute the methods described in FIG. 2 to FIG. 10 above.
The present application further provides a processor-readable storage medium storing program instructions, where the program instructions are used to cause the processor of an image processing device to execute the image processing methods in the embodiments of FIG. 2 to FIG. 10 above.
The present application further provides a processor-readable storage medium storing program instructions, where the program instructions are used to cause the processor of an image processing device to execute the image processing methods in the embodiments of FIG. 11 to FIG. 13 above.
The foregoing readable storage medium may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features thereof; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (61)

  1. An image processing method, characterized by comprising:
    acquiring a first image corresponding to a current scene;
    acquiring a first geographic location corresponding to the first image, and associating the first image with the first geographic location;
    adding a first marker at the first geographic location on a map, to indicate that the first geographic location is associated with the first image.
  2. The method according to claim 1, characterized in that after the acquiring a first image corresponding to a current scene, the method further comprises:
    displaying at least one first image at a preset position of a map interface displayed on a display screen.
  3. The method according to claim 1, characterized in that after the adding a first marker at the first geographic location on a map, the method further comprises:
    displaying the first marker at the first geographic location on the map interface displayed on the display screen.
  4. The method according to claim 3, characterized in that after the displaying the first marker at the first geographic location on the map interface displayed on the display screen, the method further comprises:
    receiving a first instruction triggered by a user operating the first marker;
    displaying, according to the first instruction, a content display interface on the map interface displayed on the display screen, wherein at least one first image associated with the first geographic location is displayed in the content display interface.
  5. The method according to claim 4, characterized in that after the displaying a content display interface on the map interface displayed on the display screen, the method further comprises:
    receiving a second instruction triggered by the user operating the content display interface;
    sending, according to the second instruction, the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device shares the at least one first image.
  6. The method according to claim 5, characterized in that after the sending the first geographic location and the at least one first image associated with the first geographic location to the network device, the method further comprises:
    replacing the first marker with a second marker, wherein the second marker is used to indicate that the first image has been shared.
  7. The method according to claim 1, characterized in that the method further comprises:
    receiving a third instruction triggered by a user operating a preset user interface;
    sending, according to the third instruction, acquired first images satisfying a preset sharing condition and the first geographic location associated with each of the first images to a network device, so that the network device shares each of the first images.
  8. The method according to claim 7, characterized in that the first images satisfying the preset sharing condition comprise:
    at least one first image associated with first markers already displayed on the map interface displayed on the display screen; or
    at least one first image whose shooting position is within a first geographic range.
  9. The method according to any one of claims 1 to 8, characterized in that the method further comprises:
    receiving a second image sent by a server and a second geographic location associated with the second image;
    adding a third marker at the second geographic location on the map, to indicate that the second geographic location is associated with the second image.
  10. The method according to claim 9, characterized in that before the receiving a second image sent by a server and a second geographic location associated with the second image, the method further comprises:
    reporting geographic location information to the server, the geographic location information comprising: the second geographic location;
    correspondingly, the second image comprises an image shot by another terminal device at the second geographic location;
    or
    reporting geographic location information to the server, the geographic location information comprising: a third geographic location, wherein the third geographic location is used for the server to determine a second geographic range corresponding to the third geographic location;
    correspondingly, the second image comprises an image shot by another terminal device within the second geographic range corresponding to the third geographic location, and the second geographic location is the shooting position of the second image;
    or
    reporting geographic location information to the server, the geographic location information comprising: a second geographic range;
    correspondingly, the second image comprises an image shot by another terminal device within the second geographic range, and the second geographic location is the shooting position of the second image.
  11. The method according to claim 9, characterized in that after the adding a third marker at the second geographic location on the map, the method further comprises:
    receiving a fourth instruction triggered by a user operating the displayed third marker;
    displaying, according to the fourth instruction, a content display interface on the display screen, wherein at least one second image associated with the second geographic location is displayed in the content display interface.
  12. The method according to claim 1, characterized in that before the acquiring a first image corresponding to a current scene, the method further comprises:
    determining that a shooting function of a camera device needs to be started;
    sending a fifth instruction to the camera device to instruct the camera device to shoot;
    the acquiring a first image corresponding to a current scene comprises:
    acquiring the first image obtained by the camera device shooting the current scene.
  13. The method according to claim 12, characterized in that the determining that a shooting function of the camera device needs to be started comprises:
    receiving sensing data sent by a sensing device, and determining according to the sensing data that the shooting function of the camera device needs to be started; or
    receiving shooting information sent by a server, the shooting information comprising a geographic location to be shot, and determining, when a geographic location of a means of transportation is the geographic location to be shot, that the shooting function of the camera device needs to be started; or
    receiving a voice signal input by a user, and determining according to the voice signal that the shooting function of the camera device needs to be started; or
    receiving a third image, performing image analysis on the third image, and determining, when it is determined according to an image analysis result that the current scene is a preset scene, that the shooting function of the camera device needs to be started; or
    receiving a control signal triggered by a user through a hardware device, and determining according to the control signal that the shooting function of the camera device needs to be started.
  14. An image processing method, characterized by comprising:
    receiving a first geographic location sent by a terminal device;
    determining a first image, wherein the first image is an image shot by another terminal device within a first geographic range corresponding to the first geographic location;
    sending the first image and a second geographic location associated with the first image to the terminal device, wherein the second geographic location is the shooting position of the first image.
  15. The method according to claim 14, characterized in that before the determining a first image, the method further comprises:
    receiving the first image sent by the other terminal device and the second geographic location associated with the first image;
    associating the first image with the second geographic location.
  16. The method according to claim 14, characterized in that the method further comprises:
    determining, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition;
    sending shooting information to the terminal device, the shooting information comprising the third geographic location, wherein the shooting information is used to indicate that a shooting function of a camera device needs to be started when a means of transportation is located at the third geographic location.
  17. The method according to claim 16, characterized in that before the determining, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition, the method further comprises:
    receiving second images sent by other terminal devices and fourth geographic locations associated with the second images, wherein the second image is an image shot by another terminal device within the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting position of the second image;
    the determining, within the second geographic range corresponding to the first geographic location, a third geographic location satisfying the preset shooting condition comprises:
    determining, according to the second images sent by other terminal devices and the fourth geographic locations associated with the second images, photographed information of each fourth geographic location;
    determining, according to the photographed information of each fourth geographic location, a third geographic location satisfying the preset shooting condition among the fourth geographic locations.
  18. The method according to claim 17, characterized in that the photographed information comprises: the frequency at which each fourth geographic location is photographed within a preset time period;
    the preset shooting condition is that the photographed frequency is greater than a preset value, or that the photographed frequency ranks before a preset rank.
  19. An image processing apparatus, characterized by comprising:
    an input module, configured to acquire a first image corresponding to a current scene;
    an association module, configured to acquire a first geographic location corresponding to the first image and associate the first image with the first geographic location;
    a marking module, configured to add a first marker at the first geographic location on a map, to indicate that the first geographic location is associated with the first image.
  20. The apparatus according to claim 19, characterized by further comprising: a first display module, configured to display at least one first image at a preset position of a map interface displayed on a display screen.
  21. The apparatus according to claim 19, characterized by further comprising: a second display module, configured to display the first marker at the first geographic location on the map interface displayed on the display screen.
  22. The apparatus according to claim 21, characterized in that the input module is further configured to receive a first instruction triggered by a user operating the first marker;
    the second display module is further configured to display, according to the first instruction, a content display interface on the map interface displayed on the display screen, wherein at least one first image associated with the first geographic location is displayed in the content display interface.
  23. The apparatus according to claim 22, characterized by further comprising: a first output module,
    wherein the input module is further configured to receive a second instruction triggered by the user operating the content display interface;
    the first output module is configured to send, according to the second instruction, the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device shares the at least one first image.
  24. The apparatus according to claim 23, characterized in that the marking module is further configured to replace the first marker with a second marker, wherein the second marker is used to indicate that the first image has been shared.
  25. The apparatus according to claim 19, characterized by further comprising: a second output module;
    the input module is further configured to receive a third instruction triggered by a user operating a preset user interface;
    the second output module is configured to send, according to the third instruction, acquired first images satisfying a preset sharing condition and the first geographic location associated with each of the first images to a network device, so that the network device shares each of the first images.
  26. The apparatus according to any one of claims 19 to 25, characterized in that the input module is further configured to receive a second image sent by a server and a second geographic location associated with the second image;
    the marking module is further configured to add a third marker at the second geographic location on the map, to indicate that the second geographic location is associated with the second image.
  27. The apparatus according to claim 26, characterized by further comprising: a third output module,
    the third output module being configured to:
    report geographic location information to the server, the geographic location information comprising: the second geographic location;
    correspondingly, the second image is an image shot by another terminal device at the second geographic location;
    or
    report geographic location information to the server, the geographic location information comprising: a third geographic location, wherein the third geographic location is used for the server to determine a second geographic range corresponding to the third geographic location;
    correspondingly, the second image is an image shot by another terminal device within the second geographic range corresponding to the third geographic location, and the second geographic location is the shooting position of the second image;
    or
    report geographic location information to the server, the geographic location information comprising: a second geographic range;
    correspondingly, the second image is an image shot by another terminal device within the second geographic range, and the second geographic location is the shooting position of the second image.
  28. The apparatus according to claim 26, characterized by further comprising: a third display module,
    wherein the input module is further configured to receive a fourth instruction triggered by a user operating the displayed third marker;
    the third display module is configured to display, according to the fourth instruction, a content display interface on the display screen, wherein the second image associated with the second geographic location is displayed in the content display interface.
  29. The apparatus according to claim 19, characterized by further comprising: a shooting module and a fourth output module;
    the shooting module is configured to determine that a shooting function of a camera device needs to be started;
    the fourth output module is configured to send a fifth instruction to the camera device to instruct the camera device to shoot;
    the input module is specifically configured to acquire the first image obtained by the camera device shooting the current scene.
  30. The apparatus according to claim 29, characterized in that the shooting module is specifically configured to:
    receive sensing data sent by a sensing device, and determine according to the sensing data that the shooting function of the camera device needs to be started; or
    receive shooting information sent by a server, the shooting information comprising a geographic location to be shot, and determine, when a geographic location of a means of transportation is the geographic location to be shot, that the shooting function of the camera device needs to be started; or
    receive a voice signal input by a user, and determine according to the voice signal that the shooting function of the camera device needs to be started; or
    receive a third image, perform image analysis on the third image, and determine, when it is determined according to an image analysis result that the current scene is a preset scene, that the shooting function of the camera device needs to be started; or
    receive a control signal triggered by a user through a hardware device, and determine according to the control signal that the shooting function of the camera device needs to be started.
  31. An image processing apparatus, characterized by comprising:
    an input module, configured to receive a first geographic location sent by a terminal device;
    a processing module, configured to determine a first image, wherein the first image is an image shot by another terminal device within a first geographic range corresponding to the first geographic location;
    an output module, configured to send the first image and a second geographic location associated with the first image to the terminal device, wherein the second geographic location is the shooting position of the first image.
  32. The apparatus according to claim 31, characterized in that the input module is further configured to receive the first image sent by the other terminal device and the second geographic location associated with the first image;
    the processing module is further configured to associate the first image with the second geographic location.
  33. The apparatus according to claim 31, characterized by further comprising: a shooting module,
    the shooting module being configured to determine, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition;
    the output module is further configured to send shooting information to the terminal device, the shooting information comprising the third geographic location, wherein the shooting information is used to indicate that a shooting function of a camera device needs to be started when a means of transportation is located at the third geographic location.
  34. The apparatus according to claim 33, characterized in that the input module is further configured to receive second images sent by other terminal devices and fourth geographic locations associated with the second images, wherein the second image is an image shot by another terminal device within the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting position of the second image;
    the shooting module is specifically configured to determine, according to the second images sent by other terminal devices and the fourth geographic locations associated with the second images, photographed information of each fourth geographic location, and to determine, according to the photographed information of each fourth geographic location, a third geographic location satisfying the preset shooting condition among the fourth geographic locations.
  35. The apparatus according to claim 34, characterized in that the photographed information comprises: the frequency at which each fourth geographic location is photographed within a preset time period;
    the preset shooting condition is that the photographed frequency is greater than a preset value, or that the photographed frequency ranks before a preset rank.
  36. An image processing device, characterized by comprising: an input device and a processor;
    the input device is configured to acquire a first image corresponding to a current scene;
    the processor, coupled to the input device, is configured to acquire a first geographic location corresponding to the first image and associate the first image with the first geographic location;
    the processor is further configured to add a first marker at the first geographic location on a map, to indicate that the first geographic location is associated with the first image.
  37. The device according to claim 36, characterized by further comprising: a display screen, the display screen being coupled to the processor, wherein the processor is further configured to control the display screen to display at least one first image at a preset position of the displayed map interface.
  38. The device according to claim 36, characterized by further comprising: a display screen, the display screen being coupled to the processor, wherein the processor is further configured to control the display screen to display the first marker at the first geographic location on the displayed map interface.
  39. The device according to claim 38, characterized in that the input device is further configured to receive a first instruction triggered by a user operating the first marker;
    the processor is further configured to control, according to the first instruction, the display screen to display a content display interface on the displayed map interface, wherein at least one first image associated with the first geographic location is displayed in the content display interface.
  40. The device according to claim 39, characterized by further comprising: an output device, the output device being coupled to the processor;
    the input device is further configured to receive a second instruction triggered by the user operating the content display interface;
    the processor is further configured to control, according to the second instruction, the output device to send the first geographic location and at least one first image associated with the first geographic location to a network device, so that the network device shares the at least one first image.
  41. The device according to claim 40, characterized in that the processor is further configured to replace the first marker with a second marker, wherein the second marker is used to indicate that the first image has been shared.
  42. The device according to claim 36, characterized by further comprising: an output device, the output device being coupled to the processor;
    the input device is further configured to receive a third instruction triggered by a user operating a preset user interface;
    the processor is further configured to control, according to the third instruction, the output device to send acquired first images satisfying a preset sharing condition and the first geographic location associated with each of the first images to a network device, so that the network device shares each of the first images.
  43. The device according to any one of claims 36 to 42, characterized in that the input device is further configured to receive a second image sent by a server and a second geographic location associated with the second image;
    the processor is further configured to add a third marker at the second geographic location on the map, to indicate that the second geographic location is associated with the second image.
  44. The device according to claim 36, characterized by further comprising: an output device, the output device being coupled to the processor;
    the processor is further configured to determine that a shooting function of a camera device needs to be started;
    the output device is configured to send a fifth instruction to the camera device to instruct the camera device to shoot;
    the input device is specifically configured to acquire the first image obtained by the camera device shooting the current scene.
  45. The device according to claim 44, characterized in that
    the input device is further configured to receive sensing data sent by a sensing device, and the processor is further configured to determine according to the sensing data that the shooting function of the camera device needs to be started; or
    the input device is further configured to receive shooting information sent by a server, the shooting information comprising a geographic location to be shot, and the processor is further configured to determine, when a geographic location of a means of transportation is the geographic location to be shot, that the shooting function of the camera device needs to be started; or
    the input device is further configured to receive a voice signal input by a user, and the processor is further configured to determine according to the voice signal that the shooting function of the camera device needs to be started; or
    the input device is further configured to receive a third image, and the processor is further configured to perform image analysis on the third image and determine, when it is determined according to an image analysis result that the current scene is a preset scene, that the shooting function of the camera device needs to be started; or
    the input device is further configured to receive a control signal triggered by a user through a hardware device, and the processor is further configured to determine according to the control signal that the shooting function of the camera device needs to be started.
  46. An image processing device, characterized by comprising: an input device, a processor, and an output device;
    the input device is configured to receive a first geographic location sent by a terminal device;
    the processor, coupled to the input device and the output device, is configured to determine a first image, wherein the first image is an image shot by another terminal device within a first geographic range corresponding to the first geographic location;
    the output device is configured to send the first image and a second geographic location associated with the first image to the terminal device, wherein the second geographic location is the shooting position of the first image.
  47. The device according to claim 46, characterized in that
    the processor is further configured to determine, within a second geographic range corresponding to the first geographic location, a third geographic location satisfying a preset shooting condition;
    the output device is further configured to send shooting information to the terminal device, the shooting information comprising the third geographic location, wherein the shooting information is used to indicate that a shooting function of a camera device needs to be started when a means of transportation is located at the third geographic location.
  48. The device according to claim 47, characterized in that
    the input device is further configured to receive second images sent by other terminal devices and fourth geographic locations associated with the second images, wherein the second image is an image shot by another terminal device within the second geographic range corresponding to the first geographic location, and the fourth geographic location is the shooting position of the second image;
    the processor is further configured to determine, according to the second images sent by other terminal devices and the fourth geographic locations associated with the second images, photographed information of each fourth geographic location, and to determine, according to the photographed information of each fourth geographic location, a third geographic location satisfying the preset shooting condition among the fourth geographic locations.
  49. A device for image processing of a means of transportation, characterized by comprising: an onboard input device and an onboard processor;
    the onboard input device is configured to acquire a first image corresponding to a current scene;
    the onboard processor, coupled to the onboard input device, is configured to acquire a first geographic location corresponding to the first image and associate the first image with the first geographic location;
    the onboard processor is further configured to add a first marker at the first geographic location on a map, to indicate that the first geographic location is associated with the first image.
  50. The device according to claim 49, characterized in that the onboard input device comprises at least one of: a programmable software interface, a transceiver, a device-facing vehicle-mounted device interface, and a user-facing vehicle-mounted user interface.
  51. The device according to claim 49, characterized by further comprising: a vehicle-mounted display screen,
    the vehicle-mounted display screen being coupled to the vehicle-mounted processor, wherein the vehicle-mounted processor is further configured to control the vehicle-mounted display screen to execute the method according to claim 2 or 3.
  52. The device according to claim 51, characterized by further comprising: an onboard output device,
    the onboard output device, coupled to the onboard processor, being configured to execute the method according to claim 5, 7, 10, or 12.
  53. The device according to claim 50, characterized in that the user-facing vehicle-mounted user interface comprises one or more of the following:
    center console control buttons;
    steering wheel control buttons;
    a voice input device;
    a touch-sensing device.
  54. The device according to any one of claims 49 to 53, characterized in that the onboard processor is further configured to execute the method according to any one of claims 2 to 13.
  55. A user interface system, characterized by comprising:
    a display component, configured to display a map interface;
    a processor, configured to trigger the display component to display a first marker at a first geographic location on the map interface, to indicate that the first geographic location is associated with a first image.
  56. The system according to claim 55, characterized in that the processor is further configured to trigger the display component to display, at a preset position of the map interface, the first image associated with the first geographic location.
  57. The system according to claim 55, characterized in that the processor is further configured to trigger the display component to display a second marker at a second geographic location on the map interface, wherein the second marker is used to indicate that a second image associated with the second geographic location has been shared.
  58. The system according to claim 55, characterized in that the processor is further configured to trigger the display component to display a third marker at a third geographic location on the map interface, to indicate that the third geographic location is associated with a second image shared by other users.
  59. The system according to any one of claims 55 to 58, characterized in that the processor is further configured to trigger, based on a user operation, the display component to display a content display interface on the map interface, wherein images corresponding to each of the markers are displayed on the content display interface.
  60. The system according to claim 59, characterized in that the processor is further configured to trigger the display component to display a user interface on the content display interface, wherein the user interface is used for the user to trigger various instructions.
  61. An in-vehicle Internet operating system, characterized by comprising:
    an image control unit, which controls a vehicle-mounted input device to acquire a first image corresponding to a current scene;
    an association control unit, which acquires a first geographic location corresponding to the first image and obtains a map with a first marker added at the first geographic location, to indicate that the first geographic location is associated with the first image, wherein the map with the first marker added is obtained by adding the first marker at the first geographic location on an original map.