Looking for an algorithm that can run on a local machine or on a cluster (Python, R, JavaScript, any language).

I have a list of locations with coordinates.

# R script
n <- 10
set.seed(1)
index <- paste0("id_",c(1:n))
lat <- runif(n, 32.0, 41)
lon <- runif(n, 84, 112)*(-1)
values <- as.integer(runif(n, 50, 100))
df <- data.frame(index, lat, lon, values, stringsAsFactors = FALSE)
names(df) <- c('loc_id','lat','lon', 'value')

   loc_id      lat        lon value
1    id_1 34.38958  -89.76729    96
2    id_2 35.34912  -88.94359    60
3    id_3 37.15568 -103.23664    82
4    id_4 40.17387  -94.75490    56
5    id_5 33.81514 -105.55556    63
6    id_6 40.08551  -97.93558    69
7    id_7 40.50208 -104.09332    50
8    id_8 37.94718 -111.77337    69
9    id_9 37.66203  -94.64099    93
10  id_10 32.55608 -105.76847    67


For each location in the table, I need to find the 3 closest locations.

Here is my code in R:

# R script
require(dplyr)
require(geosphere)

start.time <- Sys.time()
d1 <- df
sample <- 999999999999
distances <- list("init1" = sample, "init2" = sample, "init3" = sample)
d1$distances <- apply(d1, 1, function(x){distances})

n_rows = nrow(d1)
for (i in 1:(n_rows-1)) {
  # current location
  dot1 <- c(d1$lon[i], d1$lat[i])
  for (k in (i+1):n_rows) {
    # next location
    dot2 <- c(d1$lon[k], d1$lat[k])
    # distance between locations
    meters_between <- as.integer(distm(dot1, dot2, fun = distHaversine))

    # updating current location distances
    distances <- d1$distances[[i]]
    distances[d1$loc_id[k]] <- meters_between
    d1$distances[[i]] <- distances[order(unlist(distances), decreasing=FALSE)][1:3]

    # updating next location distances
    distances <- d1$distances[[k]]
    distances[d1$loc_id[i]] <- meters_between
    d1$distances[[k]] <- distances[order(unlist(distances), decreasing=FALSE)][1:3]
  }
}


But it takes too much time:

# [1] "For 10 rows and 45 iterations takes 0.124729156494141 sec. Average sec 0.00277175903320313 per row."
# [1] "For 100 rows and 4950 iterations takes 2.54944682121277 sec. Average sec 0.000515039761861165 per row."
# [1] "For 200 rows and 19900 iterations takes 10.1178169250488 sec. Average sec 0.000508433011308986 per row."
# [1] "For 500 rows and 124750 iterations takes 73.7151870727539 sec. Average sec 0.000590903303188408 per row."
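
The iteration counts in those timings are exactly n*(n-1)/2: the nested loop computes every pair of locations once, so the work grows quadratically with the number of rows. A quick arithmetic check reproduces the counts above and shows why this cannot scale:

```python
# The pairwise double loop performs n*(n-1)/2 iterations, so the work
# grows quadratically with the number of locations.
for n in (10, 100, 200, 500, 90000):
    print(n, n * (n - 1) // 2)
# 10 -> 45, 100 -> 4950, 200 -> 19900, 500 -> 124750 match the timings above;
# 90,000 locations would require 4,049,955,000 distance computations.
```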


I did the same thing in Python:

# Python script
import pandas as pd
import numpy as np

n = 10
np.random.seed(1)
data_m = np.random.uniform(0, 5, 5)
data = {'loc_id':range(1, n+1),
        'lat':np.random.uniform(32, 41, n),
        'lon':np.random.uniform(84, 112, n)*(-1),
        'values':np.random.randint(50, 100, n)}
df = pd.DataFrame(data)[['loc_id', 'lat', 'lon', 'values']]
df['loc_id'] = df['loc_id'].apply(lambda x: 'id_{0}'.format(x))
df = df.reset_index().drop('index', axis = 1).set_index('loc_id')

from geopy.distance import distance
from datetime import datetime

start_time = datetime.now()

sample = 999999999999
df['distances'] = np.nan
df['distances'] = df['distances'].apply(lambda x: [{'init1': sample}, {'init2': sample}, {'init3': sample}])

n_rows = len(df)

rows_done = 0
for i, row_i in df.head(n_rows-1).iterrows():
    dot1 = (row_i['lat'], row_i['lon'])
    rows_done = rows_done + 1
    for k, row_k in df.tail(n_rows-rows_done).iterrows():
        dot2 = (row_k['lat'], row_k['lon'])
        meters_between = int(distance(dot1,dot2).meters)
        distances = df.at[i, 'distances']
        distances.append({k: meters_between})
        distances_sorted = sorted(distances, key=lambda x: x[next(iter(x))])[:3]
        df.at[i, 'distances'] = distances_sorted
        distances = df.at[k, 'distances']
        distances.append({i: meters_between})
        distances_sorted = sorted(distances, key=lambda x: x[next(iter(x))])[:3]
        df.at[k, 'distances'] = distances_sorted

print(df)


Almost the same performance.

Does anyone know of a better way? In my task this has to be done for 90,000 locations. I have even considered Hadoop/MapReduce/Spark, but I don't know how to do it in distributed mode.

I would be glad to hear any ideas or suggestions.

Best Answer

If Euclidean distance is suitable, then nn2 uses k-d trees and C code, so it should be fast:

library(RANN)
# k = 4 because each point's nearest neighbour is itself (distance 0),
# leaving the 3 closest other locations
nn2(df[2:3], k = 4)


On my not particularly fast laptop this takes a total of 0.06 to 0.11 seconds for n = 10,000 rows, and 1.00 to 1.25 seconds for 90,000 rows.
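
For a Python counterpart, the same k-d tree idea is available via scipy.spatial.cKDTree. This is my own sketch, not part of the original answer; like nn2, it treats lat/lon as planar Euclidean coordinates, so the neighbours are approximate rather than great-circle exact:

```python
import numpy as np
from scipy.spatial import cKDTree

# Generate sample locations similar to the question's data
np.random.seed(1)
n = 10
lat = np.random.uniform(32, 41, n)
lon = np.random.uniform(84, 112, n) * (-1)
points = np.column_stack([lat, lon])

# Build the k-d tree and query k = 4 neighbours per point:
# each point's nearest neighbour is itself (distance 0), so the
# 3 closest *other* locations are columns 1..3 of the result.
tree = cKDTree(points)
dist, idx = tree.query(points, k=4)
nearest3 = idx[:, 1:]  # indices of the 3 closest other locations
print(nearest3)
```

For exact great-circle distances, one alternative is scikit-learn's BallTree with metric='haversine' on coordinates converted to radians; either way, querying all points is roughly O(n log n) rather than the O(n^2) of the pairwise loop.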

Original question on Stack Overflow: python - How to find the closest locations for a list of locations in a more efficient way? https://stackoverflow.com/questions/52048692/
