I have a .csv file full of packet-header information. The first few lines:

28;03/07/2000;11:27:51;00:00:01;8609;4961;8609;097.139.024.164;131.084.001.031;0;-
29;03/07/2000;11:27:51;00:00:01;29396;4962;29396;058.106.180.191;131.084.001.031;0;-
30;03/07/2000;11:27:51;00:00:01;26290;4963;26290;060.075.194.137;131.084.001.031;0;-
31;03/07/2000;11:27:51;00:00:01;28324;4964;28324;038.087.169.169;131.084.001.031;0;-

There are about 33k lines in total (each line holds the fields of one packet header). I now need to compute the entropy over the source and destination addresses.
Using the code I wrote:
def openFile(file_name):
    srcFile = open(file_name, 'r')
    dataset = []
    for line in srcFile:
        newLine = line.split(";")
        dataset.append(newLine)
    return dataset

The return value I get looks like:
dataset = [
    ['28', '03/07/2000', '11:27:51', '00:00:01', '8609', '4961', '8609', '097.139.024.164', '131.084.001.031', '0', '-\n'],
    ['29', '03/07/2000', '11:27:51', '00:00:01', '29396', '4962', '29396', '058.106.180.191', '131.084.001.031', '0', '-\n'],
    ['30', '03/07/2000', '11:27:51', '00:00:01', '26290', '4963', '26290', '060.075.194.137', '131.084.001.031', '0', '-\n'],
    ['31', '03/07/2000', '11:27:51', '00:00:01', '28324', '4964', '28324', '038.087.169.169', '131.084.001.031', '0', '-']
]

I pass it to my entropy function:
import math

#---- Entropy += - prob * math.log(prob, 2) ---------
def Entropy(data):
    entropy = 0
    counter = 0 # -- counter for occurrences of the same ip address
    #-- For loop to iterate through every item in outer list
    for item in range(len(data)):
        #-- For loop to iterate through inner list
        for x in data[item]:
            if x == data[item][8]:
                counter += 1
        prob = float(counter) / len(data)
        entropy += -prob * math.log(prob, 2)
    print("\n")
    print("Entropy: {}".format(entropy))

The code runs without any errors, but it produces the wrong entropy. I suspect the cause is either an incorrect probability calculation (the second for loop looks suspicious) or a wrong entropy formula. Is there a way to find the probability of an IP occurring without the second for loop? Edits to the code are welcome.

Best Answer

The code can be simplified considerably with numpy and the built-in collections module:

import numpy as np
import collections

sample_ips = [
    "131.084.001.031",
    "131.084.001.031",
    "131.284.001.031",
    "131.284.001.031",
    "131.284.001.000",
]

C = collections.Counter(sample_ips)
counts  = np.array(list(C.values()), dtype=float)  # list() needed on Python 3
prob    = counts / counts.sum()
shannon_entropy = (-prob * np.log2(prob)).sum()
print(shannon_entropy)
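The same idea drops straight into your setup without numpy: count each address once with `Counter`, then sum `-p * log2(p)` over the resulting distribution. A minimal sketch (the `shannon_entropy` helper and the column index 7 for the source address are assumptions based on the rows shown above):

```python
import math
from collections import Counter

def shannon_entropy(values):
    # Count each distinct value, turn counts into probabilities,
    # and sum -p * log2(p) over the distribution
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical usage on the parsed rows: column 7 is the source address
# src_entropy = shannon_entropy([row[7] for row in dataset])
```

Note this replaces your inner for loop entirely: each address is counted exactly once, so no occurrence is double-counted.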

Regarding python - entropy of IP packet information, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27432078/
