CN111750891B - Method, computing device, and computer storage medium for information processing - Google Patents

Method, computing device, and computer storage medium for information processing

Info

Publication number
CN111750891B
CN111750891B · Application CN202010770548.XA
Authority
CN
China
Prior art keywords
vehicle
area
predetermined
sub
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010770548.XA
Other languages
Chinese (zh)
Other versions
CN111750891A (en)
Inventor
时红仁
朱成龙
韩兆龙
熊正桥
应宜伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qwik Smart Technology Co Ltd
Original Assignee
Shanghai Qwik Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qwik Smart Technology Co Ltd filed Critical Shanghai Qwik Smart Technology Co Ltd
Priority to CN202010770548.XA priority Critical patent/CN111750891B/en
Publication of CN111750891A publication Critical patent/CN111750891A/en
Application granted granted Critical
Publication of CN111750891B publication Critical patent/CN111750891B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/02 Reservations, e.g. for tickets, services or events
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Artificial Intelligence (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • Mathematical Physics (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to an information-processing method, a computing device, and a computer storage medium. The method comprises: acquiring a request for a predetermined transaction from a user terminal or a vehicle-mounted device; in response to determining that the commodity attribute belongs to a predetermined attribute set, determining whether a free sub-area exists in a predetermined area; in response to determining that the predetermined area has a free sub-area, determining an associated sub-area for the vehicle among the free sub-areas; determining first navigation information regarding at least one of the predetermined area and the associated sub-area, so as to send the user terminal a response confirming the request, the response indicating at least the first navigation information and an identification of the associated sub-area; and in response to determining that the predetermined time has arrived, acquiring the current location of the vehicle so as to confirm that it matches the geographic location of the associated sub-area. The present disclosure can automatically match users' needs regarding commodity types and transaction locations while also serving city-management needs.

Description

Method for information processing, computing device, and computer storage medium
Technical Field
The present disclosure relates generally to electronic commerce and, in particular, to methods, computing devices, and computer storage media for information processing.
Background
In traditional e-commerce information processing, a user visits a web page of an internet e-commerce platform and trades online based on the commodity information (e.g., display images) and links provided by the platform. Although this approach is not limited by a merchant's geographic location, an online transaction based on commodity information alone rarely lets the user actually experience the goods, which leads to a certain return rate. A traditional transaction scheme based on offline brick-and-mortar stores improves the user's real experience of the goods, but because the store location is fixed, it cannot meet users' changing transaction needs across regions. A traditional mobile-marketplace scheme overcomes the fixed-location drawback, but merchants find it difficult to accurately obtain the position of their assigned business position in the mobile marketplace, the scheme cannot automatically match more users' online demands regarding commodity types, transaction locations, and so on, and it also poses a challenge to city management.
Therefore, conventional e-commerce information-processing schemes cannot automatically match users' needs regarding commodity types and transaction locations while also satisfying city-management requirements.
Disclosure of Invention
The present disclosure provides a method, computing device, and computer storage medium for information processing capable of automatically matching users' needs regarding commodity types and transaction locations while also meeting city-management requirements.
According to a first aspect of the present disclosure, a method for information processing is provided. The method comprises: acquiring, from a user terminal or a vehicle-mounted device, a request for a predetermined transaction, the request indicating at least a commodity attribute associated with a vehicle and an identification of the vehicle, the predetermined transaction being associated with a predetermined time; in response to determining that the commodity attribute belongs to a predetermined attribute set, determining whether a free sub-area exists in a predetermined area; in response to determining that the predetermined area has a free sub-area, determining an associated sub-area for the vehicle among the free sub-areas; determining first navigation information regarding at least one of the predetermined area and the associated sub-area, so as to send the user terminal a response confirming the request, the response indicating at least the first navigation information and an identification of the associated sub-area; and in response to determining that the predetermined time has arrived, acquiring the current location of the vehicle so as to confirm that it matches the geographic location of the associated sub-area.
According to a second aspect of the present disclosure, there is also provided an electronic device. The device comprises: a memory configured to store one or more computer programs; and a processor coupled to the memory and configured to execute the one or more programs, causing the device to perform the method of the first aspect of the disclosure.
According to a third aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium. The non-transitory computer readable storage medium has stored thereon machine executable instructions which, when executed, cause a machine to perform the method of the first aspect of the disclosure.
In some embodiments, obtaining the location of the vehicle comprises: acquiring GPS information and pose-sensor detection information from the vehicle; in response to determining that the vehicle's position falls within the longitude and latitude thresholds defined by the diagonal corner points of the predetermined area, acquiring an environment image captured by a predetermined vehicle-mounted camera device of the vehicle; identifying, via an image-based neural network model, a target object contained in the environment image and its associated position; determining a computed position of the vehicle based on the identified target object and its associated position; and fusing the GPS information, the pose-sensor detection information, and the computed position of the vehicle to determine the current position of the vehicle.
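The fusion step above can be sketched as a simple weighted combination of the three position estimates. The function name `fuse_position`, the default weights, and the weighted-average rule itself are illustrative assumptions; the disclosure does not specify how the GPS, pose-sensor, and vision-derived positions are combined (a production system might instead use a Kalman filter):

```python
def fuse_position(gps_pos, vision_pos, dead_reckoned_pos,
                  weights=(0.5, 0.4, 0.1)):
    """Fuse three (lat, lon) position estimates into one.

    gps_pos:           position reported by the GPS receiver
    vision_pos:        position computed from target objects recognized
                       in the vehicle-mounted camera's environment image
    dead_reckoned_pos: position propagated from the pose sensor's
                       detection information since the last fix
    The weights are hypothetical and would be tuned per sensor quality.
    """
    total = sum(weights)
    sources = (gps_pos, vision_pos, dead_reckoned_pos)
    lat = sum(w * p[0] for w, p in zip(weights, sources)) / total
    lon = sum(w * p[1] for w, p in zip(weights, sources)) / total
    return lat, lon
```

When all three estimates agree, the fused position equals them; otherwise it is pulled toward the more heavily weighted sources.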
In some embodiments, the information-processing method further comprises: acquiring geographic-location information associated with the predetermined area based on the name of the predetermined area; determining the longitude and latitude thresholds of the predetermined area's diagonal corner points based on that geographic-location information; determining the longitude and latitude of each sub-area contained in the predetermined area based on the diagonal corner points' thresholds and a predetermined number of sub-areas; and determining the longitude and latitude thresholds of each sub-area's diagonal corner points based on each sub-area's longitude and latitude.
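The subdivision step can be sketched as splitting the bounding box given by the area's diagonal corner points into a regular grid. The grid shape (`n_cols` x `n_rows`), the corner ordering, and the sequential sub-area numbering are assumptions; the disclosure only states that the split uses the diagonal thresholds and a predetermined number:

```python
def subdivide_area(sw_corner, ne_corner, n_cols, n_rows):
    """Split the bounding box defined by two diagonal corner points
    (lat, lon) into an n_cols x n_rows grid of sub-areas.

    Returns a dict mapping a sequential sub-area id (1, 2, ...) to that
    sub-area's own pair of diagonal corner points:
    ((lat_min, lon_min), (lat_max, lon_max)).
    """
    lat0, lon0 = sw_corner
    lat1, lon1 = ne_corner
    dlat = (lat1 - lat0) / n_rows   # height of one sub-area in degrees
    dlon = (lon1 - lon0) / n_cols   # width of one sub-area in degrees
    sub_areas = {}
    sid = 1
    for r in range(n_rows):
        for c in range(n_cols):
            sub_areas[sid] = (
                (lat0 + r * dlat, lon0 + c * dlon),
                (lat0 + (r + 1) * dlat, lon0 + (c + 1) * dlon),
            )
            sid += 1
    return sub_areas
```

Each sub-area's diagonal corner thresholds then serve both for navigation targets and for the later position-matching check.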
In some embodiments, the navigation information of at least one of the predetermined area and the associated sub-area comprises: navigation information to an entrance of the predetermined area and navigation information to the center point of the associated sub-area.
In some embodiments, the information-processing method further comprises: in response to determining that the current location of the vehicle matches the geographic location of the associated sub-area, generating an occupancy identifier indicating that the associated sub-area is occupied; and, based on the occupancy identifier, rendering an image associated with the associated sub-area so as to present an image indicating the predetermined area.
In some embodiments, the information-processing method further comprises: in response to detecting a vehicle at a predetermined position of the predetermined area, determining the identification of the vehicle; determining the associated sub-area of the vehicle based on its identification; acquiring the occupancy identifiers indicating which sub-areas of the predetermined area are occupied; and generating, based on the vehicle's associated sub-area and the occupancy identifiers, second navigation information from the predetermined position to the associated sub-area, so as to send it to at least one of the vehicle-mounted device and the user terminal of the vehicle.
In some embodiments, the information-processing method further comprises: acquiring, from the user terminal or the vehicle-mounted device, an image and textual description information about the commodity attribute; in response to determining that the vehicle's request has been confirmed, generating a commodity description image associated with the vehicle's associated sub-area based on the image and the textual description information; and generating presentation information about the predetermined transaction based on the commodity description image.
In some embodiments, the information-processing method further comprises: in response to determining that the predetermined time has arrived and confirming that the current location of the vehicle is not within the longitude and latitude thresholds of the diagonal corner points of the vehicle's associated sub-area, detecting whether transaction information associated with the vehicle exists; and, in response to detecting transaction information associated with the vehicle, generating prompt information.
In some embodiments, the predetermined attribute set is determined by: acquiring historical orders, within a predetermined time interval, whose delivery location is within a predetermined distance of the predetermined area; clustering based on the commodity category indicated by the historical orders and the purchase quantity associated with each category; and determining the predetermined attribute set based on the commodity categories whose purchase quantity exceeds a predetermined threshold and which meet a predetermined condition.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
Fig. 1 shows a schematic diagram of a system for a method of information processing according to an embodiment of the present disclosure.
Fig. 2 shows a flow diagram of a method for information processing according to an embodiment of the present disclosure.
Fig. 3 schematically shows a schematic diagram of a method for determining geographical location information of a predetermined area according to an embodiment of the present disclosure.
Fig. 4 schematically shows an architectural diagram of a neural network model according to one embodiment of the present disclosure.
FIG. 5 shows a flow chart of a method for obtaining a current position of a vehicle according to an embodiment of the present disclosure.
Fig. 6 shows a flow chart of a method for presenting an image of a predetermined area according to an embodiment of the present disclosure.
FIG. 7 schematically illustrates a block diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Like or corresponding reference characters indicate like or corresponding parts throughout the several views.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As described above, the conventional information-processing schemes (internet e-commerce platform, brick-and-mortar store, or mobile marketplace) cannot automatically match users' diverse needs regarding commodity types and transaction locations while also satisfying users' real experience of the goods and city-management requirements.
To address, at least in part, one or more of the above issues and other potential issues, an example embodiment of the present disclosure proposes a scheme for information processing. The scheme comprises: acquiring, from a user terminal or a vehicle-mounted device, a request for a predetermined transaction, the request indicating at least a commodity attribute associated with a vehicle and an identification of the vehicle, the predetermined transaction being associated with a predetermined time; in response to determining that the commodity attribute belongs to a predetermined attribute set, determining whether a free sub-area exists in a predetermined area; in response to determining that the predetermined area has a free sub-area, determining an associated sub-area for the vehicle among the free sub-areas; determining first navigation information regarding at least one of the predetermined area and the associated sub-area, so as to send the user terminal a response confirming the request, the response indicating at least the first navigation information and an identification of the associated sub-area; and in response to determining that the predetermined time has arrived, acquiring the current location of the vehicle so as to confirm that it matches the geographic location of the associated sub-area.
In this scheme, by determining that the commodity attribute associated with the vehicle belongs to the predetermined attribute set and that a free sub-area exists in the predetermined area; determining an associated sub-area for the vehicle among the free sub-areas; including the determined navigation information about at least one of the predetermined area and the associated sub-area in the confirmation response sent to the user; and automatically checking, when the predetermined time arrives, that the vehicle's current position matches the geographic location of the associated sub-area, the present disclosure can match the goods traded in the predetermined area (a mobile marketplace) against a predetermined set of commodity attributes; navigate a vehicle accurately to its position even when the predetermined area and the associated sub-area (business position) carry no physical markings; and automatically confirm, when the transaction time arrives, whether the vehicle has reached its assigned business position. The scheme can therefore automatically match users' diverse demands regarding commodity types and transaction locations while also meeting users' real experience of goods and city-management needs.
Fig. 1 shows a schematic diagram of a system 100 for a method of information processing according to an embodiment of the present disclosure. As shown in fig. 1, the system 100 includes a plurality of vehicles 110 (other vehicles not shown), a user terminal 120 associated with a user of a vehicle 110, a computing device 130, a base station 150, a network 170, and a predetermined area 160. In some embodiments, the multiple vehicles 110 travel in different areas. The vehicle 110, the user's user terminal 120, and the computing device 130 may exchange data via the base station 150 and the network 170.
The computing device 130, upon determining that the commodity attribute associated with the vehicle belongs to the predetermined attribute set and that a free sub-area exists in the predetermined area, determines an associated sub-area for the vehicle among the free sub-areas; determines first navigation information regarding at least one of the predetermined area and the associated sub-area, so as to send at least one of the user terminal and the vehicle-mounted device a response confirming the request, the response indicating at least the first navigation information and the identification of the associated sub-area; and, upon determining that the predetermined time has arrived, confirms that the current location of the vehicle matches the geographic location of the associated sub-area. Computing device 130 may have one or more processing units, including special-purpose processing units such as GPUs, FPGAs, and ASICs, as well as general-purpose processing units such as CPUs. One or more virtual machines may also run on each computing device. As shown in fig. 1, computing device 130 includes, for example and without limitation: a data acquisition unit 132, a free sub-area determination unit 134, a vehicle-associated sub-area determination unit 136, a navigation information determination unit 138, a response sending unit 140, and a vehicle current-position matching unit 142.
The data acquisition unit 132 is used, for example, to acquire a request for a predetermined transaction, the request indicating at least the commodity attribute associated with the vehicle and the identification of the vehicle; the predetermined transaction is associated with a predetermined time.
The free sub-area determination unit 134 is used, for example, to determine whether a free sub-area exists in the predetermined area when the commodity attribute is determined to belong to the predetermined attribute set.
The vehicle-associated sub-area determination unit 136 is used, for example, to determine a sub-area associated with the vehicle among the free sub-areas when the predetermined area is determined to have a free sub-area.
The navigation information determination unit 138 is used, for example, to determine the first navigation information regarding at least one of the predetermined area and the associated sub-area.
The response sending unit 140 is used, for example, to send at least one of the user terminal and the vehicle-mounted device a response confirming the request, the response indicating at least the first navigation information and the identification of the associated sub-area.
The vehicle current-position matching unit 142 is used, for example, to acquire the current position of the vehicle when the predetermined time is determined to have arrived, so as to confirm that the current position of the vehicle matches the geographic location of the associated sub-area.
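The matching check performed by the vehicle current-position matching unit 142 can be sketched as a bounding-box test against the associated sub-area's diagonal corner thresholds. The `margin` tolerance parameter is a hypothetical addition (real GPS fixes jitter by a few meters); the disclosure itself only requires that the position fall within the thresholds:

```python
def position_matches(vehicle_pos, sub_area_corners, margin=0.0):
    """Return True if the vehicle's (lat, lon) lies inside the sub-area's
    bounding box, given as two diagonal corner points
    ((lat_min, lon_min), (lat_max, lon_max)).

    margin: optional tolerance in degrees added on every side,
            an assumption not specified by the disclosure.
    """
    (lat_min, lon_min), (lat_max, lon_max) = sub_area_corners
    lat, lon = vehicle_pos
    return (lat_min - margin <= lat <= lat_max + margin
            and lon_min - margin <= lon <= lon_max + margin)
```

On a match, the scheme can then generate the occupancy identifier for the sub-area; on a mismatch at the predetermined time, it can trigger the prompt-information path described later.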
The vehicle 110 includes at least: a vehicle-mounted device (e.g., an in-vehicle head unit), a vehicle-mounted data-sensing device, a vehicle-mounted T-BOX, a vehicle-mounted display, and the like. The vehicle-mounted data-sensing device senses, in real time, data of the vehicle and of the external environment in which the vehicle is located, and comprises at least a plurality of vehicle-mounted camera devices. The vehicle 110 and the user terminal 120 may interact and share data through wireless communication such as Wi-Fi, Bluetooth, cellular, and NFC.
The vehicle-mounted camera devices can capture video or still images of the environment outside the vehicle and include, for example: a front camera, a rear camera, a roof camera, and the like.
The vehicle-mounted device (e.g., the head unit) may send the captured sequence of environment images, the vehicle's GPS information, and the pose sensor's detection information to the computing device 130, so that the computing device 130 can determine the current position of the vehicle by fusing the GPS information, the pose-sensor detection information, and the position computed from the environment images. The vehicle-mounted device of vehicle 110 may also obtain from the computing device 130 the navigation information regarding at least one of the predetermined area and the associated sub-area, display it on the vehicle-mounted display, and, based on that information, navigate the vehicle 110 to the associated sub-area (e.g., a business position in the marketplace) determined by the computing device 130.
The user terminal 120 is, for example but not limited to, a mobile phone; it may also be a tablet computer or a wearable device. The user terminal 120 may interact directly with the vehicle-mounted T-BOX, or may interact with the computing device 130 via the base station 150 and the network 170. In some embodiments, the user terminal 120 sends the computing device 130 a request for a predetermined transaction (the request indicating at least the commodity attribute associated with the vehicle and the identification of the vehicle) and receives the response sent by the computing device 130 confirming the request (the response indicating at least the navigation information and the identification of the associated sub-area). The user terminal 120 may, for example, establish an association with the vehicle 110 upon detecting a predetermined action (e.g., a shake gesture) on the user terminal 120.
The predetermined area 160 is, for example, a designated mobile-marketplace area, such as a temporarily closed street, an underground parking lot, or the interior of a residential community. For example, the predetermined area 160 is a street near the community where the user lives that prohibits through traffic at a predetermined time (e.g., Friday from 6 p.m. to 9 p.m.) and becomes a mobile marketplace made up of mobile vehicles; the user can obtain information about the goods sold or displayed there through the app of the associated smart community and, when interested in the goods, can go to the predetermined area to experience and purchase them. A sub-area 162 of the predetermined area is, for example, a business area in which a confirmed vehicle stays for the duration of the predetermined transaction. The predetermined area 160 may include a plurality of sub-areas 162, each to be associated with a confirmed vehicle 110; a sub-area not yet associated with a confirmed vehicle is a free sub-area.
A method 200 for information processing according to an embodiment of the present disclosure will be described below in conjunction with fig. 2. Fig. 2 shows a flow diagram of a method 200 for information processing according to an embodiment of the present disclosure. It should be understood that the method 200 may be performed, for example, at the electronic device 700 depicted in fig. 7, or at the computing device 130 depicted in fig. 1. It should also be understood that method 200 may include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At step 202, the computing device 130 obtains a request from a user terminal or in-vehicle device for a predetermined transaction, the request indicating at least an attribute of a good associated with a vehicle and an identification of the vehicle, the predetermined transaction associated with a predetermined time.
The request for the predetermined transaction may be sent to the computing device 130 by the user via the user terminal, or via the vehicle-mounted device. The commodity attribute associated with the vehicle is, for example, the category of goods that the vehicle intends to sell or display on site in the predetermined area.
At step 204, the computing device 130 determines whether the item attribute belongs to a predetermined set of attributes.
If the computing device 130 determines that the commodity attribute belongs to the predetermined attribute set, then at step 206 it determines whether the predetermined area has a free sub-area. The predetermined attribute set is determined, for example, by the computing device 130 based on the historical consumption records of users near the predetermined area. For example, the computing device 130 may obtain historical orders, within a predetermined time interval, whose delivery location is within a predetermined distance of the predetermined area; cluster them based on the commodity category indicated by the orders and the purchase quantity associated with each category; and determine the predetermined attribute set from the commodity categories whose purchase quantity exceeds a predetermined threshold and which meet a predetermined condition, such as the goods being suitable for mobile sales. This makes the goods sold by vehicles recruited into the predetermined area better match the purchase demand of users near that area.
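The derivation of the predetermined attribute set from historical orders can be sketched as follows. Grouping by exact commodity category stands in for the clustering step (the real scheme may cluster similar categories together), and the `mobile_salable` set models the predetermined condition; both, along with the function name, are simplifying assumptions:

```python
from collections import Counter

def predetermined_attribute_set(orders, threshold, mobile_salable):
    """Derive the predetermined attribute set from historical orders.

    orders:         iterable of (category, quantity) pairs, already
                    filtered to orders whose delivery location lies
                    within the predetermined distance of the area
    threshold:      minimum total purchase quantity for a category
    mobile_salable: categories satisfying the predetermined condition
                    (e.g. suitable for mobile sales)
    """
    totals = Counter()
    for category, quantity in orders:
        totals[category] += quantity        # aggregate demand per category
    return {c for c, q in totals.items()
            if q > threshold and c in mobile_salable}
```

A category with high demand that fails the predetermined condition (e.g. goods unsuitable for mobile sales) is excluded even though its quantity exceeds the threshold.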
If the computing device 130 determines that the predetermined area has an unoccupied sub-area, at step 208 the vehicle's associated sub-area is determined from among the unoccupied sub-areas.
The associated sub-area of a vehicle is determined, for example, based on the vehicle's goods attribute, so that the associated sub-areas of vehicles with the same attribute are adjacent to one another, making it easier for users to experience and compare goods. For example, the computing device 130 obtains the identifications of the free sub-areas as well as the identifications and associated goods attributes of the sub-areas already assigned to vehicles; in response to determining that the current vehicle's goods attribute is the same as or similar to that of one or more already assigned sub-areas, it determines the free sub-area whose identification is closest to those sub-areas' identifications as the current vehicle's associated sub-area. The identification of a sub-area is, for example, a sequence number, where proximity of sequence numbers indicates proximity of locations.
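The nearest-free-sub-area selection can be sketched as follows; the helper name and data shapes are hypothetical, with sequence-number distance standing in for physical proximity as the text describes:

```python
def assign_sub_area(free_ids, assigned, attr):
    """Pick the free sub-area whose sequence number is closest to any
    sub-area already assigned to a vehicle with the same goods attribute;
    fall back to the lowest free sequence number otherwise."""
    same = [sid for sid, a in assigned.items() if a == attr]
    if not same:
        return min(free_ids)
    # sorted() makes tie-breaking deterministic (prefer the lower id)
    return min(sorted(free_ids), key=lambda f: min(abs(f - s) for s in same))

# sub-area 8 already sells fruit, so the fruit vehicle gets the adjacent free 7
print(assign_sub_area({2, 7, 9}, {8: "fruit", 1: "books"}, "fruit"))  # 7
```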
At step 210, the computing device 130 determines first navigation information regarding at least one of the predetermined area and the associated sub-area, and sends a response to the request to at least one of the user terminal and the in-vehicle device, the response indicating at least the first navigation information and the identification of the associated sub-area.
The navigation information for the predetermined area and the associated sub-area may be determined, for example, as follows: the computing device 130 obtains geographic location information associated with the predetermined area based on its name; determines the longitude and latitude thresholds of the diagonal points of the predetermined area based on that geographic location information; determines the longitude and latitude information of each sub-area included in the predetermined area based on those diagonal-point thresholds and the predetermined number of sub-areas; and determines the longitude and latitude thresholds of the diagonal points of each sub-area from that sub-area's longitude and latitude information.
A method for determining geographical location information of a predetermined area 300 is described below in conjunction with fig. 3. Fig. 3 schematically shows a method for determining geographical location information of a predetermined area 300 according to an embodiment of the disclosure. As shown in fig. 3, the rectangular box ABCD schematically shows the electronic fence (i.e., the virtual bazaar boundary) of the predetermined area 300. The predetermined area 300 includes, for example, an entrance 310 and an exit 320, and point E (330) indicates the current vehicle position. The abscissa corresponding to the diagonal points A and D of the predetermined area 300 is the minimum latitude value, the abscissa corresponding to the diagonal points B and C is the maximum latitude value, the ordinate corresponding to the diagonal points A and B is the maximum longitude value, and the ordinate corresponding to the diagonal points C and D is the minimum longitude value. Similarly, the associated longitude and latitude thresholds may be determined for each of the plurality of sub-areas included in the predetermined area. The computing device 130 may use the longitude and latitude of the entrance 310 as the navigation information of the predetermined area, and the longitude and latitude of the center point of the associated sub-area as the navigation information of the associated sub-area, for transmission to the user terminal or in-vehicle device of the associated vehicle.
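Splitting the fence into sub-areas and taking each centre point as its navigation target can be sketched as follows; the grid shape and coordinate values are illustrative only:

```python
def sub_area_centers(min_lat, max_lat, min_lng, max_lng, rows, cols):
    """Split the rectangular fence given by its diagonal-point thresholds
    into rows x cols sub-areas and return the centre (lat, lng) of each,
    row by row."""
    dlat = (max_lat - min_lat) / rows
    dlng = (max_lng - min_lng) / cols
    centers = []
    for r in range(rows):
        for c in range(cols):
            centers.append((min_lat + (r + 0.5) * dlat,
                            min_lng + (c + 0.5) * dlng))
    return centers

print(sub_area_centers(31.0, 31.2, 121.0, 121.4, rows=2, cols=2))
```

Each sub-area's own diagonal-point thresholds follow directly from the same grid arithmetic (corner instead of centre offsets).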
At step 212, the computing device 130 determines whether a predetermined time has arrived.
If the computing device 130 determines that the predetermined time has arrived, at step 214, the current location of the vehicle is obtained for confirming a match of the current location of the vehicle with the geographic location of the associated sub-region.
The computing device 130 may confirm whether the vehicle has entered the electronic fence of the predetermined area 300 by confirming whether the current location 330 of the vehicle is between the longitude and latitude thresholds of the diagonal points (A, B, C, D) of the electronic fence of the predetermined area 300, and confirm the matching of the current location of the vehicle to the geographic location of the associated sub-area by confirming whether the current location 330 of the vehicle is between the longitude and latitude thresholds of the diagonal points of the associated sub-area, and whether the angle of the vehicle coincides with the rectangular box of the associated sub-area (e.g., the relative angle is 180 degrees). The following schematically shows code for enabling the confirmation of the matching of the current position of the vehicle with the geographical position of the associated sub-area.
“package com.pateonavi.naviapp.imteam.business.team;

public class TextCode {
    /**
     * @param lat    latitude of the target point
     * @param lng    longitude of the target point
     * @param minLat minimum latitude of the diagonal points
     * @param maxLat maximum latitude of the diagonal points
     * @param minLng minimum longitude of the diagonal points
     * @param maxLng maximum longitude of the diagonal points
     * @return whether the target point lies inside the rectangular area
     */
    public static boolean isInRectangleArea(double lat, double lng, double minLat, double maxLat,
                                            double minLng, double maxLng) {
        if (isInRange(lat, minLat, maxLat)) { // within the latitude range
            if (minLng * maxLng > 0) { // both longitudes on the same side of 0
                return isInRange(lng, minLng, maxLng); // within the longitude range
            } else {
                if (Math.abs(minLng) + Math.abs(maxLng) < 180) { // does not straddle the antimeridian
                    return isInRange(lng, minLng, maxLng);
                } else { // the area straddles the +/-180 degree meridian
                    double left = Math.max(minLng, maxLng);
                    double right = Math.min(minLng, maxLng);
                    return isInRange(lng, left, 180) || isInRange(lng, right, -180);
                }
            }
        } else {
            return false;
        }
    }

    /**
     * @param point target coordinate
     * @param left  one diagonal-point coordinate
     * @param right the other diagonal-point coordinate
     * @return whether the coordinate lies between the two diagonal-point coordinates
     */
    public static boolean isInRange(double point, double left, double right) {
        return point > Math.min(left, right) && point < Math.max(left, right);
    }
}”
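For quick verification outside the vehicle stack, the Java routine above can be mirrored in Python; a sketch under the same diagonal-threshold convention (the function names are ours):

```python
def is_in_range(point, left, right):
    """True when point lies strictly between the two diagonal coordinates."""
    return min(left, right) < point < max(left, right)

def is_in_rectangle_area(lat, lng, min_lat, max_lat, min_lng, max_lng):
    """Mirror of the Java isInRectangleArea, including the antimeridian
    case where the fence's longitudes straddle +/-180 degrees."""
    if not is_in_range(lat, min_lat, max_lat):
        return False
    if min_lng * max_lng > 0 or abs(min_lng) + abs(max_lng) < 180:
        return is_in_range(lng, min_lng, max_lng)
    left, right = max(min_lng, max_lng), min(min_lng, max_lng)
    return is_in_range(lng, left, 180) or is_in_range(lng, right, -180)

print(is_in_rectangle_area(31.1, 121.2, 31.0, 31.2, 121.0, 121.4))  # True
print(is_in_rectangle_area(10.0, 179.5, 9.0, 11.0, 179.0, -179.0))  # True (straddles 180)
```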
The current position of the vehicle may be acquired in various ways; for example, the computing device 130 acquires the vehicle's GPS position information and the detection information of the pose sensor, and confirms the current position of the vehicle based on the acquired GPS position information and pose-sensor detection information. In some embodiments, after confirming that the vehicle has reached the entrance of the predetermined area, the computing device 130 may also determine the current position of the vehicle by fusing the GPS information, the detection information of the pose sensor, and the calculated position of the vehicle. The method 500 for obtaining the current position of the vehicle is described later with reference to fig. 5 and is not detailed here.
In the above scheme, by determining that the goods attribute associated with the vehicle belongs to the predetermined attribute set and that a free sub-area exists in the predetermined area, and then determining the vehicle's associated sub-area within the free sub-areas, the present disclosure can automatically match users' diverse demands for transaction goods categories and transaction locations while also satisfying users' need for a real goods experience and the requirements of city management.
In some embodiments, the operator of the mobile marketplace may set a predetermined area on the road approved by the management department, and issue location information of the predetermined area and a predetermined time of a predetermined transaction through the computing device 130 to facilitate traffic management of the relevant road.
In some embodiments, the method 200 further includes a method for presenting information about the goods in the predetermined transaction. For example, the computing device 130 may obtain images and textual descriptions of the goods attributes from the user terminal or the in-vehicle device; if the vehicle's request is confirmed, generate a goods description image associated with the vehicle's associated sub-area based on the images and textual descriptions; and generate presentation information about the predetermined transaction based on the goods description image. In this way, online and offline presentations of the goods information associated with each sub-area can be generated. The computing device 130 may publish the goods information to the smart community. The goods description image may be a composite of a photograph and a goods caption, or of a moving image and a goods caption.
In some embodiments, the computing device 130 may also count the number, categories, or amounts of the transactions associated with each sub-area to determine the goods demand of that area.
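A minimal tally of per-sub-area demand might look like this; the tuple format of the transaction records is an assumption:

```python
from collections import defaultdict

def demand_by_sub_area(transactions):
    """Tally order count and units sold per (sub-area, category) so the
    operator can gauge demand around each sub-area. `transactions` is an
    iterable of (sub_area_id, category, quantity) tuples."""
    stats = defaultdict(lambda: [0, 0])  # (sub_area, category) -> [orders, units]
    for sub_area, category, qty in transactions:
        entry = stats[(sub_area, category)]
        entry[0] += 1
        entry[1] += qty
    return dict(stats)

print(demand_by_sub_area([(1, "fruit", 5), (1, "fruit", 3), (2, "coffee", 2)]))
```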
In some embodiments, the method 200 further includes a method for obtaining the current position of the vehicle. A method 500 for obtaining the current position of the vehicle is described in conjunction with figs. 4 and 5. Fig. 5 shows a flowchart of a method 500 for obtaining the position of a vehicle according to an embodiment of the present disclosure. It should be understood that method 500 may be performed, for example, at the electronic device 700 depicted in fig. 7, or at the computing device 130 depicted in fig. 1. It should also be understood that method 500 may include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At step 502, the computing device 130 acquires GPS information from the vehicle and detection information of the pose sensor.
At step 504, the computing device 130 determines whether the location of the vehicle is within longitude and latitude thresholds of diagonal points of the predetermined area.
If the computing device 130 determines that the location of the vehicle is within the longitude and latitude thresholds of the diagonal points of the predetermined area, at step 506, an environmental image captured by a predetermined onboard camera of the vehicle is acquired.
At step 508, the computing device 130 identifies, via a neural network model, the target object included in the environmental image and the associated location of the target object, the neural network model having been trained on multiple samples.
The neural network model is constructed, for example, based on the SSD algorithm (Single Shot MultiBox Detector). The architecture of the neural network model 400 is described below in conjunction with fig. 4. Fig. 4 schematically shows the architecture of a neural network model 400 according to one embodiment of the present disclosure.
The input to the neural network model 400 is, for example, an environmental image of the predetermined area at the current location of the vehicle, captured by a predetermined vehicle-mounted camera of the vehicle; the size of the environmental image is, for example, 300 × 300 × 3.
The output of the neural network model 400 is, for example, the environmental image annotated with the object target and the associated location of the target object.
The network structure of the neural network model 400 includes, for example: a base convolutional layer, auxiliary convolutional layers, and prediction convolutional layers connected in sequence. The base convolutional layers (e.g., the layers indicated as 402, 404, 406, 408 to 410 in fig. 4) are used to extract feature maps and follow, for example, the VGG16 network structure or the ResNet network structure. Taking VGG16 as an example: the VGG16 network includes convolutional layers and fully-connected layers, and the fully-connected layers are used for classification, so the fully-connected layer FC6 (i.e., the sixth fully-connected layer) of VGG16 is replaced with a 3×3 convolutional layer Conv6 (i.e., the sixth convolutional layer), the fully-connected layer FC7 (i.e., the seventh fully-connected layer) is replaced with a 1×1 convolutional layer Conv7 (i.e., the seventh convolutional layer), and the pooling layer Pool5 (i.e., the fifth pooling layer) is changed to a 3×3 pooling layer with stride 1. After the base VGG16 layers, the neural network model 400 adds auxiliary convolutional layers, e.g., 5 convolutional layers, for extracting features for the bounding boxes. Prediction convolutional layers are attached to the auxiliary convolutional layers and are used to obtain, from the extracted feature maps, a prediction result comprising the object target and the associated location of the target object.
The feature maps extracted via VGG16 can be fed into the auxiliary convolutional layers of the multi-scale extraction network in a variety of ways, for example using an add_extras(cfg, i, batch_norm=False) function. Implementation code of the add_extras function is schematically shown below.

“layers = []
in_channels = i
flag = False
for k, v in enumerate(cfg):
    if in_channels != 'S':
        if v == 'S':
            layers += [nn.Conv2d(in_channels, cfg[k + 1],
                                 kernel_size=(1, 3)[flag], stride=2, padding=1)]
        else:
            layers += [nn.Conv2d(in_channels, v, kernel_size=(1, 3)[flag])]
        flag = not flag
    in_channels = v”
The data flow through the network structure of the neural network model 400 is, for example, as follows: an environmental image of the predetermined region with size 300 × 300 × 3 is input into the VGG16 network and subjected to the convolution processing indicated by 402, 404 to 406; after the convolutional layer Conv4 indicated by 408 (i.e., the fourth convolutional layer), the feature map is, for example, 38 × 38 × 512; then, through the pooling layer indicated by 410 and the convolutional layer Conv6 indicated by 412, a feature map of, for example, 19 × 19 × 1024 is obtained; and through the 1 × 1 × 1024 convolution operation Conv7 indicated by 414, a feature map of, for example, 19 × 19 × 1024 is obtained. Then, through the convolution operation Conv8 (i.e., the eighth convolutional layer) indicated by 416, a feature map of, for example, 10 × 10 × 512 is obtained; via the convolution operation Conv9 indicated by 418 (i.e., the ninth convolutional layer), the resulting feature map is, for example, 5 × 5 × 256; via the convolution operation Conv10 (i.e., the tenth convolutional layer) indicated at 420, a feature map of, for example, 3 × 3 × 256 is obtained; and via the convolution operation Conv11 indicated at 422 (i.e., the eleventh convolutional layer), the resulting feature map is, for example, 1 × 1 × 256.
Via the neural network model 400, preselection boxes are placed on the 6 feature maps of different sizes described above: smaller preselection boxes on the shallow feature maps and larger ones on the deep feature maps, for detecting smaller and larger target objects, respectively. The parameters of a preselection box include its scale and aspect ratio. The preset preselection boxes are matched against the real boxes in the labels, and positive and negative samples are screened out based on the overlap, yielding the true values of the category and offset of the target object. For example, the neural network model 400 generates 4 preselection boxes for each pixel of the feature map produced by Conv4 (indicated at 408), 6 preselection boxes for each pixel of the feature maps produced by Conv7, Conv8, and Conv9, and 4 preselection boxes for each pixel of the feature maps produced by Conv10 and Conv11. After the convolution operations of Conv4, Conv7, Conv8, Conv9, Conv10, and Conv11, the neural network model 400 performs classification and regression for each generated preselection box, then applies the non-maximum suppression operation 424, and finally obtains a prediction result comprising the object target and its box position (i.e., the associated location of the target object).
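The per-pixel box counts quoted above correspond to the standard SSD300 configuration; as a sanity check, multiplying each feature-map size by its box count recovers the familiar total of 8732 default boxes:

```python
# (feature-map side length, preselection boxes per pixel) for the six maps:
# Conv4 38x38, Conv7 19x19, Conv8 10x10, Conv9 5x5, Conv10 3x3, Conv11 1x1
feature_maps = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]

total_boxes = sum(side * side * boxes for side, boxes in feature_maps)
print(total_boxes)  # 8732 boxes go through classification, regression, and NMS
```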
The loss function of the neural network model 400 includes, for example, a position regression loss and a confidence loss. It is described below in connection with equation (1):

$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right)$    (1)

In equation (1) above, the overall loss function is a weighted sum of the classification and regression errors. $\alpha$ represents the adjustment weight between the foreground loss and the background loss and is set, for example, to 1. $c$ represents the category confidence prediction. $l$ represents the predicted position of the bounding box corresponding to the prior box. $N$ represents the number of real target boxes (ground truth boxes) matched to the default boxes (i.e., the number of positive samples). $g$ represents the real target box. $L_{conf}$ represents the confidence loss, the sum of the classification loss of the foreground and the classification loss of the background, which is, for example, a Softmax loss. $L_{loc}$ represents the position regression loss, the regression loss of the position coordinates of all prior anchor boxes (anchors) used for foreground classification, for example using the smoothed L1 loss function. The calculation of $L_{conf}$ is described in conjunction with equations (2) and (3):

$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log(\hat{c}_i^p) - \sum_{i \in Neg} \log(\hat{c}_i^0)$    (2)

$\hat{c}_i^p = \frac{\exp(c_i^p)}{\sum_p \exp(c_i^p)}$    (3)

In equations (2) and (3) above, $x_{ij}^{p}$ represents the match of the i-th anchor to the j-th real target box (ground truth), and $p$ represents the category of the ground truth. The first term is the sum of the classification losses of the positive samples, and the second term is the sum of the classification losses of the negative samples.
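The confidence (Softmax) loss described above can be sketched numerically; the scores below are toy values, not model outputs:

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def confidence_loss(pos, neg):
    """Sum of negative log softmax probabilities: positive boxes scored
    against their matched ground-truth class p, negative boxes against
    the background class 0."""
    loss = 0.0
    for scores, p in pos:
        loss -= math.log(softmax(scores)[p])
    for scores in neg:
        loss -= math.log(softmax(scores)[0])
    return loss

# one positive box matched to class 2, one negative (background) box
print(round(confidence_loss([([0.1, 0.2, 2.0], 2)], [[1.5, 0.3, 0.2]]), 4))
```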
At step 510, the computing device 130 determines a calculated position of the vehicle based on the identified target object and the associated position of the target object.
In some embodiments, the computing device 130 may determine the calculated position of the vehicle based on the height, angle, and range of the predetermined onboard camera, the associated position of the target object, and a predetermined pixel-distance calibration of the onboard camera. For this, the pixel coordinates of the target object in the environmental image must be converted into a vehicle position, which requires that the predetermined pixel-distance calibration of the predetermined onboard camera be carried out in advance. For example, after the predetermined onboard camera is fixed at its predetermined mounting position on the vehicle 110, a near point and a far point are selected on the ground in front of the camera (i.e., in the vehicle traveling direction), and the actual distances from the near point and the far point to the reference point are measured. The actual distance of the position corresponding to each row of pixels in the image captured by the onboard camera is then computed based on the camera's height above the ground, thereby determining the camera's predetermined pixel-distance calibration. Markers are then placed at several points with different actual positions on the ground in front of the camera to verify whether the determined pixel-distance calibration is accurate.
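The row-to-distance calibration can be sketched with a linear interpolation between the measured near and far points; this is a flat-ground simplification of the height-based computation the text describes, and all numbers are illustrative:

```python
def build_row_calibration(near, far, image_height):
    """Map each pixel row to a ground distance by linear interpolation
    between two calibrated measurements: `near` and `far` are
    (pixel_row, metres) pairs measured in front of the camera. Real
    calibration would use the camera height, tilt, and intrinsics."""
    (r0, d0), (r1, d1) = near, far
    scale = (d1 - d0) / (r1 - r0)
    return [d0 + (row - r0) * scale for row in range(image_height)]

rows = build_row_calibration(near=(280, 2.0), far=(40, 20.0), image_height=300)
print(round(rows[160], 2))  # ground distance (m) for a target detected on row 160
```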
At step 512, the computing device 130 fuses the GPS information, the detection information of the pose sensor, and the calculated position of the vehicle to determine the current position of the vehicle.
The GPS signal of the vehicle has a large error and a low update rate, and some areas (e.g., underground parking lots) have no GPS signal at all, so it is difficult to locate the current position of the vehicle with high precision based on the GPS signal alone. The positioning accuracy of pose sensors such as the gyroscopes and accelerometers mounted on the vehicle is also limited, and their error accumulates over time. Therefore, the current position of the vehicle can be determined more accurately by fusing the vehicle's GPS signal, the detection signal of the pose sensor, and the calculated position of the vehicle determined from the environmental image captured by the predetermined onboard camera.
For example, the computing device 130 performs feature-extraction transformations on the GPS signal, the detection signal of the pose sensor, and the calculated position of the vehicle derived from the environmental image, extracting feature vectors that represent the detection data; performs pattern recognition on the feature vectors to generate each sensor's description data about the target; associates the description data of the sensors that refer to the same target; and synthesizes the description data of each such target using a fusion algorithm (e.g., a Bayesian algorithm or Kalman filtering) to generate synthesized description data about the target, based on which the current position of the vehicle is determined.
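As a one-shot stand-in for the Bayesian/Kalman fusion described above, an inverse-variance weighted average illustrates how the three position sources combine; the variances and coordinates are made up:

```python
def fuse_positions(estimates):
    """Inverse-variance weighted fusion of independent position estimates.
    `estimates` is a list of ((lat, lng), variance) pairs, e.g. from GPS,
    pose-sensor dead reckoning, and the camera-based calculated position.
    Lower variance means higher trust in that source."""
    wsum = sum(1.0 / var for _, var in estimates)
    lat = sum(p[0] / var for p, var in estimates) / wsum
    lng = sum(p[1] / var for p, var in estimates) / wsum
    return lat, lng

fused = fuse_positions([((31.1000, 121.2000), 25.0),   # GPS, large error
                        ((31.1004, 121.2004), 4.0),    # dead reckoning
                        ((31.1002, 121.2002), 1.0)])   # vision, most precise
print(fused)
```

A Kalman filter generalizes this by carrying the fused estimate and its variance forward in time between measurements.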
By adopting the above scheme, the present disclosure can accurately determine the position of the vehicle.
In some embodiments, the method 200 further comprises: after determining that the vehicle has entered the electronic fence via a predetermined location (e.g., the entrance) of the predetermined area, the computing device 130 sends information for augmented reality (AR) navigation to guide the vehicle to park in its associated sub-area. In this way, visual auxiliary positioning can be performed with the onboard camera, and AR navigation guidance can be provided based on the electronic fence map.
In some embodiments, method 200 also includes a method 600 for presenting an image of the predetermined area. Fig. 6 shows a flowchart of a method 600 for presenting an image of a predetermined area in accordance with an embodiment of the present disclosure. It should be understood that method 600 may be performed, for example, at the electronic device 700 depicted in fig. 7, or at the computing device 130 depicted in fig. 1. It should also be understood that method 600 may include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At step 602, if the computing device 130 determines that the vehicle location matches the geographic location of the associated sub-region, an identifier is generated indicating that the associated sub-region is occupied.
At step 604, based on the identifier, the computing device 130 renders the image associated with the associated sub-region, for presenting an image indicative of the predetermined region. In this way, the positioning of each vehicle and its associated goods in the mart can be displayed in real time.
In some embodiments, if the computing device 130 detects an identification that the associated sub-region is already occupied, the relevant link to the composite image of the item associated with the associated sub-region is activated for receiving an order for the associated sub-region.
At step 606, the computing device 130 determines whether the predetermined time has arrived and confirms that the current location of the vehicle is not within the diagonal longitude threshold and the diagonal latitude threshold of the sub-region associated with the vehicle.
At step 608, if the computing device 130 determines that the predetermined time has arrived and confirms that the current location of the vehicle is not within the diagonal longitude threshold and the diagonal latitude threshold of the sub-region associated with the vehicle, it detects whether there is transaction information associated with the vehicle.
If the computing device 130 detects that there is transaction information associated with the vehicle, at step 610, a prompt is generated.
By these means, it can be ensured that transactions are executed only after the vehicle has parked in place in its associated sub-area, thereby avoiding situations where a vehicle that has not parked at its associated sub-area disturbs the management order of the marketplace.
In some embodiments, the method 600 further comprises a method for navigating to the vehicle's associated sub-area. For example, if the computing device detects a vehicle at a predetermined location of the predetermined area, it determines the identification of the vehicle; determines the associated sub-area of the vehicle based on that identification; acquires the occupation identifiers indicating which sub-areas in the predetermined area are occupied; and generates second navigation information from the predetermined location to the associated sub-area, based on the vehicle's associated sub-area and the occupation identifiers, to send to at least one of the vehicle's in-vehicle device and user terminal. Thus, the present disclosure may provide navigation information indicating the best path for the current vehicle to reach its associated sub-area, based on which sub-areas are currently occupied.
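The best-path-given-occupied-sub-areas idea can be sketched as a breadth-first search over the grid of sub-areas; the cell coordinates and the rule that occupied cells block the route are our assumptions:

```python
from collections import deque

def route_to_sub_area(rows, cols, occupied, start, goal):
    """Breadth-first search over the grid of sub-areas, treating occupied
    sub-areas as blocked, yielding a shortest path of (row, col) cells
    from the entrance cell to the vehicle's associated sub-area."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and nxt not in seen and nxt not in occupied:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route (goal unreachable)

print(route_to_sub_area(3, 3, occupied={(1, 1)}, start=(0, 0), goal=(2, 2)))
```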
Fig. 7 schematically illustrates a block diagram of an electronic device 700 suitable for implementing embodiments of the present disclosure. The device 700 may be used to implement the methods 200, 500, and 600 shown in figs. 2, 5, and 6. As shown in fig. 7, device 700 includes a central processing unit (CPU) 701 that can perform various appropriate actions and processes in accordance with computer program instructions stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 can also store the various programs and data required for the operation of the device 700. The CPU 701, ROM 702, and RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The processing unit 701 performs the respective methods and processes described above, e.g., the methods 200, 500, and 600. For example, in some embodiments, the methods 200, 500, 600 may be implemented as a computer software program stored on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the CPU 701, one or more operations of the methods 200, 500, 600 described above may be performed. Alternatively, in other embodiments, the CPU 701 may be configured in any other suitable manner (e.g., by means of firmware) to perform one or more acts of the methods 200, 500, 600.
It is further noted that the present disclosure may be methods, apparatus, systems and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry that can execute the computer-readable program instructions, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor in a voice interaction device, a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
The above are merely optional embodiments of the present disclosure and are not intended to limit it; those skilled in the art may make various modifications and variations to the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included within its scope of protection.

Claims (11)

1. A method of information processing, comprising:
obtaining a request from a user terminal or a vehicle-mounted device regarding a predetermined transaction, the request indicating at least a merchandise attribute associated with a vehicle and an identification of the vehicle, the predetermined transaction being associated with a predetermined time;
in response to determining that the merchandise attribute belongs to a predetermined attribute set, determining whether a predetermined area has a vacant sub-area;
in response to determining that the predetermined area has a vacant sub-area, determining an associated sub-area for the vehicle among the vacant sub-areas;
determining first navigation information regarding at least one of the predetermined area and the associated sub-area, and sending a response to the request to at least one of the user terminal and the vehicle-mounted device, the response indicating at least the first navigation information and an identification of the associated sub-area; and
in response to determining that the predetermined time has arrived, obtaining a current location of the vehicle to confirm that the current location of the vehicle matches the geographic location of the associated sub-area.
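For illustration only, the request-handling flow of claim 1 (validate the merchandise attribute, allocate a vacant sub-area, return navigation information) might be sketched as below. All names, the `route-to:` placeholder, and the returned dictionary layout are hypothetical and not part of the claimed method:

```python
from dataclasses import dataclass


@dataclass
class Request:
    """A predetermined-transaction request from a user terminal or vehicle-mounted device."""
    merchandise_attribute: str
    vehicle_id: str
    scheduled_time: float  # the predetermined time, e.g. epoch seconds


def handle_request(req, predetermined_attributes, vacant_subareas):
    """Sketch of the claim-1 flow: check the attribute set, allocate a
    vacant sub-area, and build first navigation information."""
    if req.merchandise_attribute not in predetermined_attributes:
        return None  # attribute outside the predetermined attribute set
    if not vacant_subareas:
        return None  # predetermined area has no vacant sub-area
    sub_area = vacant_subareas.pop(0)  # associate the first vacant sub-area
    navigation = f"route-to:{sub_area}"  # placeholder for first navigation information
    return {"sub_area": sub_area, "navigation": navigation}
```

A real implementation would also persist the allocation and, per the last step of the claim, compare the vehicle's current location against the sub-area's geographic bounds at the predetermined time.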
2. The method of claim 1, wherein obtaining the current location of the vehicle comprises:
acquiring GPS information and detection information of a pose sensor from the vehicle;
in response to determining that the position of the vehicle is within the longitude and latitude thresholds of the diagonal points of the predetermined area, acquiring an environment image captured by a predetermined vehicle-mounted camera of the vehicle;
identifying, via a neural network model trained on multiple samples, a target object included in the environment image and an associated position of the target object;
determining a calculated position of the vehicle based on the identified target object and the associated position of the target object; and
fusing the GPS information, the detection information of the pose sensor, and the calculated position of the vehicle to determine the current location of the vehicle.
3. The method of claim 1, further comprising:
acquiring geographic position information associated with the predetermined area based on the name of the predetermined area;
determining longitude and latitude thresholds of the diagonal points of the predetermined area based on the geographic position information;
determining longitude information and latitude information of each sub-area included in the predetermined area based on the longitude and latitude thresholds of the diagonal points and a predetermined number; and
determining longitude and latitude thresholds of the diagonal points of each sub-area based on the longitude information and the latitude information of each sub-area.
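The sub-area computation of claim 3 can be illustrated by evenly subdividing the rectangle spanned by the predetermined area's diagonal corners into a grid; the row/column parameterization and the (lat, lon) tuple representation are assumptions for illustration:

```python
def subdivide(area_sw, area_ne, rows, cols):
    """Split a rectangle given by its diagonal (south-west, north-east)
    corners into rows x cols sub-areas, returning each sub-area's own
    diagonal corners as ((sw_lat, sw_lon), (ne_lat, ne_lon))."""
    lat0, lon0 = area_sw
    lat1, lon1 = area_ne
    dlat = (lat1 - lat0) / rows  # latitude span of one sub-area
    dlon = (lon1 - lon0) / cols  # longitude span of one sub-area
    cells = []
    for r in range(rows):
        for c in range(cols):
            sw = (lat0 + r * dlat, lon0 + c * dlon)
            ne = (lat0 + (r + 1) * dlat, lon0 + (c + 1) * dlon)
            cells.append((sw, ne))
    return cells
```

Matching a vehicle's current location to a sub-area then reduces to checking whether its (lat, lon) falls between one cell's diagonal corners.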
4. The method of claim 1, wherein the navigation information of at least one of the predetermined area and the associated sub-area comprises: navigation information on an entrance of the predetermined area and navigation information on a center point of the associated sub-area.
5. The method of claim 1, further comprising:
in response to determining that the current location of the vehicle matches the geographic location of the associated sub-area, generating an occupancy identifier indicating that the associated sub-area is occupied; and
rendering an image associated with the associated sub-area based on the occupancy identifier, for presentation in an image indicative of the predetermined area.
6. The method of claim 5, further comprising:
in response to detecting a vehicle at a predetermined location of the predetermined area, determining an identity of the vehicle;
determining the associated sub-area of the vehicle based on the identity of the vehicle;
acquiring occupancy identifiers indicating which sub-areas in the predetermined area are occupied; and
generating second navigation information from the predetermined location to the associated sub-area based on the associated sub-area of the vehicle and the occupancy identifiers, and sending the second navigation information to at least one of the vehicle-mounted device of the vehicle and the user terminal.
7. The method of claim 1, further comprising:
acquiring an image and textual description information regarding the merchandise attribute from the user terminal or the vehicle-mounted device;
in response to determining that the request regarding the vehicle has been confirmed, generating a merchandise description image associated with the associated sub-area of the vehicle based on the image and the textual description information; and
generating presentation information regarding the predetermined transaction based on the merchandise description image.
8. The method of claim 1, further comprising:
in response to determining that the predetermined time has arrived and confirming that the current location of the vehicle is not within the longitude and latitude thresholds of the diagonal points of the sub-area associated with the vehicle, detecting whether transaction information associated with the vehicle is present; and
in response to detecting transaction information associated with the vehicle, generating a prompt message.
9. The method of claim 1, wherein the predetermined attribute set is determined via:
acquiring historical orders, within a predetermined time interval, whose delivery location is at a distance less than or equal to a predetermined distance from the predetermined area;
clustering based on the merchandise categories indicated by the historical orders and the purchase quantities associated with those categories; and
determining the predetermined attribute set based on the merchandise categories whose purchase quantities exceed a predetermined threshold and which meet a predetermined condition.
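A non-limiting sketch of how the predetermined attribute set of claim 9 might be derived; for brevity this replaces the clustering step with a plain per-category count over qualifying historical orders, and the order field names are hypothetical:

```python
from collections import Counter


def predetermined_attribute_set(orders, max_distance, threshold):
    """Group historical orders by merchandise category and keep the
    categories whose purchase count exceeds the threshold; orders whose
    delivery location is farther than max_distance from the
    predetermined area are ignored."""
    counts = Counter(
        o["category"] for o in orders if o["distance"] <= max_distance
    )
    return {cat for cat, n in counts.items() if n > threshold}
```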
10. An electronic device, comprising:
a memory configured to store one or more computer programs; and
a processor coupled to the memory and configured to execute the one or more computer programs to cause the electronic device to perform the method of any of claims 1-9.
11. A non-transitory computer readable storage medium having stored thereon machine executable instructions which, when executed, cause a machine to perform the steps of the method of any of claims 1-9.
CN202010770548.XA 2020-08-04 2020-08-04 Method, computing device, and computer storage medium for information processing Active CN111750891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010770548.XA CN111750891B (en) 2020-08-04 2020-08-04 Method, computing device, and computer storage medium for information processing


Publications (2)

Publication Number Publication Date
CN111750891A CN111750891A (en) 2020-10-09
CN111750891B true CN111750891B (en) 2022-07-12

Family

ID=72713004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010770548.XA Active CN111750891B (en) 2020-08-04 2020-08-04 Method, computing device, and computer storage medium for information processing

Country Status (1)

Country Link
CN (1) CN111750891B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112504292B (en) * 2020-11-18 2023-03-28 广东中电绿能科技有限公司 Navigation method, navigation device and mobile terminal based on consumption information
CN114973510B (en) * 2021-02-25 2023-10-20 博泰车联网科技(上海)股份有限公司 Automatic vending method and system based on vehicles
CN113744413A (en) * 2021-08-18 2021-12-03 南斗六星系统集成有限公司 Elevation matching method and system for vehicle on three-dimensional high-precision map road

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169639A (en) * 2011-01-21 2011-08-31 王岐顺 Parking place vehicle sensing device for vehicle cinema
CN106845547A (en) * 2017-01-23 2017-06-13 重庆邮电大学 A kind of intelligent automobile positioning and road markings identifying system and method based on camera
CN107451946A (en) * 2017-08-11 2017-12-08 兰州大学 Huckster management method and system
CN109086973A (en) * 2018-07-10 2018-12-25 安徽云软信息科技有限公司 A kind of show ground Intelligentized regulating and controlling management system
CN109116397A (en) * 2018-07-25 2019-01-01 吉林大学 A kind of vehicle-mounted multi-phase machine vision positioning method, device, equipment and storage medium
TW201913510A (en) * 2017-08-25 2019-04-01 日商日本電氣股份有限公司 Shop device, store system, store management method and program
CN110147094A (en) * 2018-11-08 2019-08-20 北京初速度科技有限公司 A kind of vehicle positioning method and car-mounted terminal based on vehicle-mounted viewing system
CN110349432A (en) * 2019-06-28 2019-10-18 北京汽车集团有限公司 Parking stall preordering method, device, system and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10325248B2 (en) * 2013-10-01 2019-06-18 Visa International Service Association Automobile mobile-interaction platform apparatuses, methods and systems
CN108960785B (en) * 2018-07-13 2020-10-27 维沃移动通信有限公司 Information prompting method and device
CN109064277B (en) * 2018-07-25 2022-05-24 北京小米移动软件有限公司 Commodity display method and device
CN109034973B (en) * 2018-07-25 2021-03-30 北京京东尚科信息技术有限公司 Commodity recommendation method, commodity recommendation device, commodity recommendation system and computer-readable storage medium


Also Published As

Publication number Publication date
CN111750891A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN111750891B (en) Method, computing device, and computer storage medium for information processing
US11042751B2 (en) Augmented reality assisted pickup
CN109141464B (en) Navigation lane change prompting method and device
CN108318043A (en) Method, apparatus for updating electronic map and computer readable storage medium
JP7436655B2 (en) Vehicle parking management method, electronic device, and computer storage medium
CN107430815A (en) Method and system for automatic identification parking area
US20200182646A1 (en) Systems and methods for displaying map information
US11651689B2 (en) Method, apparatus, and computer program product for identifying street parking based on aerial imagery
CN110763250A (en) Method, device and system for processing positioning information
CN112750323A (en) Management method, apparatus and computer storage medium for vehicle safety
US20210055121A1 (en) Systems and methods for determining recommended locations
CN108267142B (en) Navigation display method and system based on address card and vehicle machine
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN115164918A (en) Semantic point cloud map construction method and device and electronic equipment
US20210375135A1 (en) Method for indicating parking position and vehicle-mounted device
US20220058825A1 (en) Attention guidance for correspondence labeling in street view image pairs
US20210383544A1 (en) Semantic segmentation ground truth correction with spatial transformer networks
CN114677848B (en) Perception early warning system, method, device and computer program product
CN115830073A (en) Map element reconstruction method, map element reconstruction device, computer equipment and storage medium
CN114998863A (en) Target road identification method, target road identification device, electronic equipment and storage medium
CN114897686A (en) Vehicle image splicing method and device, computer equipment and storage medium
CN114724107A (en) Image detection method, device, equipment and medium
CN114743395A (en) Signal lamp detection method, device, equipment and medium
CN112885087A (en) Method, apparatus, device and medium for determining road condition information and program product
CN115982306B (en) Method and device for identifying retrograde behavior of target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211015

Address after: Room 2010-2012, No.30 Tianyaoqiao Road, Xuhui District, Shanghai

Applicant after: SHANGHAI QWIK SMART TECHNOLOGY Co.,Ltd.

Address before: 210000 4th floor, tower C, Tengfei building, 88 Jiangmiao Road, Jiangbei new district, Nanjing City, Jiangsu Province

Applicant before: Botai Internet of vehicles (Nanjing) Co.,Ltd.

GR01 Patent grant