Regression algorithms work well with numbers; it is clear how to run a regression on purely numerical data and predict the output. However, I need to run a regression on data that contains categorical features. I have a CSV file with two columns, install-id and page-name, both of object dtype. I need to provide install-id as input and predict the page name as output. Below is my code. Please help me.
```python
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import train_test_split

data = pd.read_csv("/Users/kashifjilani/Downloads/csv/newjsoncontent.csv")

X = data["install-id"]
Y = data["endPoint"]

# one-hot encode the categorical input column
X = pd.get_dummies(data=X, drop_first=True)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=40)

regr = linear_model.LinearRegression()
regr.fit(X_train, Y_train)
predicted = regr.predict(X_test)
```
Best answer
For the demonstration, suppose you have this dataframe, where `IQ` and `Gender` are the input features and `Test Score` is the target variable.
| Student | IQ | Gender | Test Score |
|----------:|-----:|:---------|-------------:|
| 1 | 125 | Male | 93 |
| 2 | 120 | Female | 86 |
| 3 | 115 | Male | 96 |
| 4 | 110 | Female | 81 |
| 5 | 105 | Male | 92 |
| 6 | 100 | Female | 75 |
| 7 | 95 | Male | 84 |
| 8 | 90 | Female | 77 |
| 9 | 85 | Male | 73 |
| 10 | 80 | Female | 74 |
Here, `IQ` is numerical, while `Gender` is a categorical feature. In the preprocessing step, we apply a `SimpleImputer` to the numerical feature and a one-hot encoder to the categorical feature. You can use sklearn's `Pipeline` and `ColumnTransformer` for this. You can then train and predict just as easily with the model of your choice.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn import linear_model

# defining the data
d = {
    "Student": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "IQ": [125, 120, 115, 110, 105, 100, 95, 90, 85, 80],
    "Gender": [
        "Male",
        "Female",
        "Male",
        "Female",
        "Male",
        "Female",
        "Male",
        "Female",
        "Male",
        "Female",
    ],
    "Test Score": [93, 86, 96, 81, 92, 75, 84, 77, 73, 74],
}

# converting into a pandas dataframe
df = pd.DataFrame(d)

# setting the student id as index to keep track
df = df.set_index("Student")

# column groups for the transformer
categorical_columns = ["Gender"]
numerical_columns = ["IQ"]

# determine X and y
X = df[categorical_columns + numerical_columns]
y = df["Test Score"]

# train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42, test_size=0.3
)

# categorical pipeline: one-hot encode the categorical feature
categorical_pipe = Pipeline([("onehot", OneHotEncoder(handle_unknown="ignore"))])

# numerical pipeline: impute missing values with the mean
numerical_pipe = Pipeline([("imputer", SimpleImputer(strategy="mean"))])

# aggregating both pipelines
preprocessing = ColumnTransformer(
    [
        ("cat", categorical_pipe, categorical_columns),
        ("num", numerical_pipe, numerical_columns),
    ]
)

# full pipeline: preprocessing followed by a linear regression
model = Pipeline(
    [("preprocess", preprocessing), ("regressor", linear_model.LinearRegression())]
)

# train
model.fit(X_train, y_train)

# predict
predicted = model.predict(X_test)
```
This gives:

```
>> array([84.48275862, 84.55172414, 79.13793103])
```