1. Data schema for training
AutoML for Images in Azure Machine Learning requires input image data to be prepared in JSONL (JSON Lines) format. This section describes the input data formats, or schemas, for multi-class image classification, multi-label image classification, object detection, and instance segmentation. We also provide examples of final training or validation JSON Lines files.
Image classification (binary/multi-class)
Input data format/schema in each JSON Line:
{
    "image_url": "azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image>",
    "image_details": {
        "format": "image_format",
        "width": "image_width",
        "height": "image_height"
    },
    "label": "class_name"
}
Example of a JSONL file for multi-class image classification:
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_01.jpg", "image_details":{"format": "jpg", "width": "400px", "height": "258px"}, "label": "can"}
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_02.jpg", "image_details": {"format": "jpg", "width": "397px", "height": "296px"}, "label": "milk_bottle"}
.
.
.
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_n.jpg", "image_details": {"format": "jpg", "width": "1024px", "height": "768px"}, "label": "water_bottle"}
Multi-label image classification
The following is the input data format/schema in each JSON Line for multi-label image classification.
{
    "image_url": "azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image>",
    "image_details": {
        "format": "image_format",
        "width": "image_width",
        "height": "image_height"
    },
    "label": [
        "class_name_1",
        "class_name_2",
        "class_name_3",
        "...",
        "class_name_n"
    ]
}
Example of a JSONL file for multi-label image classification:
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_01.jpg", "image_details":{"format": "jpg", "width": "400px", "height": "258px"}, "label": ["can"]}
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_02.jpg", "image_details": {"format": "jpg", "width": "397px", "height": "296px"}, "label": ["can","milk_bottle"]}
.
.
.
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_n.jpg", "image_details": {"format": "jpg", "width": "1024px", "height": "768px"}, "label": ["carton","milk_bottle","water_bottle"]}
Object detection
The following is the input data format/schema in each JSON Line for object detection.
{
    "image_url": "azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image>",
    "image_details": {
        "format": "image_format",
        "width": "image_width",
        "height": "image_height"
    },
    "label": [
        {
            "label": "class_name_1",
            "topX": "xmin/width",
            "topY": "ymin/height",
            "bottomX": "xmax/width",
            "bottomY": "ymax/height",
            "isCrowd": "isCrowd"
        },
        {
            "label": "class_name_2",
            "topX": "xmin/width",
            "topY": "ymin/height",
            "bottomX": "xmax/width",
            "bottomY": "ymax/height",
            "isCrowd": "isCrowd"
        },
        "..."
    ]
}
Where:
- xmin = x coordinate of the top-left corner of the bounding box
- ymin = y coordinate of the top-left corner of the bounding box
- xmax = x coordinate of the bottom-right corner of the bounding box
- ymax = y coordinate of the bottom-right corner of the bounding box

As the schema shows, these pixel coordinates are normalized: the x values are divided by the image width and the y values by the image height, so every stored value lies between 0 and 1. isCrowd indicates whether the bounding box encloses a crowd of objects (0 for no, 1 for yes).
Example of a JSONL file for object detection:
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_01.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "can", "topX": 0.260, "topY": 0.406, "bottomX": 0.735, "bottomY": 0.701, "isCrowd": 0}]}
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_02.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "carton", "topX": 0.172, "topY": 0.153, "bottomX": 0.432, "bottomY": 0.659, "isCrowd": 0}, {"label": "milk_bottle", "topX": 0.300, "topY": 0.566, "bottomX": 0.891, "bottomY": 0.735, "isCrowd": 0}]}
.
.
.
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_n.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "carton", "topX": 0.0180, "topY": 0.297, "bottomX": 0.380, "bottomY": 0.836, "isCrowd": 0}, {"label": "milk_bottle", "topX": 0.454, "topY": 0.348, "bottomX": 0.613, "bottomY": 0.683, "isCrowd": 0}, {"label": "water_bottle", "topX": 0.667, "topY": 0.279, "bottomX": 0.841, "bottomY": 0.615, "isCrowd": 0}]}
Instance segmentation
For instance segmentation, automated ML supports only polygons as input and output; masks aren't supported.
The following is the input data format/schema in each JSON Line for instance segmentation.
{
    "image_url": "azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image>",
    "image_details": {
        "format": "image_format",
        "width": "image_width",
        "height": "image_height"
    },
    "label": [
        {
            "label": "class_name",
            "isCrowd": "isCrowd",
            "polygon": [["x1", "y1", "x2", "y2", "x3", "y3", "...", "xn", "yn"]]
        }
    ]
}
Example of a JSONL file for instance segmentation:
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_01.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "can", "isCrowd": 0, "polygon": [[0.577, 0.689, 0.567, 0.689, 0.559, 0.686, 0.380, 0.593, 0.304, 0.555, 0.294, 0.545, 0.290, 0.534, 0.274, 0.512, 0.2705, 0.496, 0.270, 0.478, 0.284, 0.453, 0.308, 0.432, 0.326, 0.423, 0.356, 0.415, 0.418, 0.417, 0.635, 0.493, 0.683, 0.507, 0.701, 0.518, 0.709, 0.528, 0.713, 0.545, 0.719, 0.554, 0.719, 0.579, 0.713, 0.597, 0.697, 0.621, 0.695, 0.629, 0.631, 0.678, 0.619, 0.683, 0.595, 0.683, 0.577, 0.689]]}]}
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_02.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "carton", "isCrowd": 0, "polygon": [[0.240, 0.65, 0.234, 0.654, 0.230, 0.647, 0.210, 0.512, 0.202, 0.403, 0.182, 0.267, 0.184, 0.243, 0.180, 0.166, 0.186, 0.159, 0.198, 0.156, 0.396, 0.162, 0.408, 0.169, 0.406, 0.217, 0.414, 0.249, 0.422, 0.262, 0.422, 0.569, 0.342, 0.569, 0.334, 0.572, 0.320, 0.585, 0.308, 0.624, 0.306, 0.648, 0.240, 0.657]]}, {"label": "milk_bottle", "isCrowd": 0, "polygon": [[0.675, 0.732, 0.635, 0.731, 0.621, 0.725, 0.573, 0.717, 0.516, 0.717, 0.505, 0.720, 0.462, 0.722, 0.438, 0.719, 0.396, 0.719, 0.358, 0.714, 0.334, 0.714, 0.322, 0.711, 0.312, 0.701, 0.306, 0.687, 0.304, 0.663, 0.308, 0.630, 0.320, 0.596, 0.32, 0.588, 0.326, 0.579]]}]}
.
.
.
{"image_url": "azureml://subscriptions/my-subscription-id/resourcegroups/my-resource-group/workspaces/my-workspace/datastores/my-datastore/paths/image_data/Image_n.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "water_bottle", "isCrowd": 0, "polygon": [[0.334, 0.626, 0.304, 0.621, 0.254, 0.603, 0.164, 0.605, 0.158, 0.602, 0.146, 0.602, 0.142, 0.608, 0.094, 0.612, 0.084, 0.599, 0.080, 0.585, 0.080, 0.539, 0.082, 0.536, 0.092, 0.533, 0.126, 0.530, 0.132, 0.533, 0.144, 0.533, 0.162, 0.525, 0.172, 0.525, 0.186, 0.521, 0.196, 0.521 ]]}, {"label": "milk_bottle", "isCrowd": 0, "polygon": [[0.392, 0.773, 0.380, 0.732, 0.379, 0.767, 0.367, 0.755, 0.362, 0.735, 0.362, 0.714, 0.352, 0.644, 0.352, 0.611, 0.362, 0.597, 0.40, 0.593, 0.444, 0.494, 0.588, 0.515, 0.585, 0.621, 0.588, 0.671, 0.582, 0.713, 0.572, 0.753 ]]}]}
2. Data schema for online scoring
In this section, we document the input data format needed to make predictions when using the deployed model.
Input format
The following JSON is the input format needed to generate predictions for any task by using a task-specific model endpoint.
{
    "input_data": {
        "columns": [
            "image"
        ],
        "data": [
            "image_in_base64_string_format"
        ]
    }
}
This JSON is a dictionary with the outer key input_data and the inner keys columns and data. The endpoint accepts a JSON string in this format and converts it into a dataframe of samples required by the scoring script. Each input image in the request_json["input_data"]["data"] section of the JSON is a base64-encoded string.
After deploying the MLflow model, we can use the following code snippet to get predictions for all tasks.
# Create request json
import base64
import json
import os

sample_image = os.path.join(dataset_dir, "images", "1.jpg")

def read_image(image_path):
    with open(image_path, "rb") as f:
        return f.read()

# Each input image is sent as a base64-encoded string
request_json = {
    "input_data": {
        "columns": ["image"],
        "data": [base64.encodebytes(read_image(sample_image)).decode("utf-8")],
    }
}

# Write the request payload to a file and invoke the endpoint
request_file_name = "sample_request_data.json"
with open(request_file_name, "w") as request_file:
    json.dump(request_json, request_file)

resp = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name=deployment.name,
    request_file=request_file_name,
)
Output format
Predictions made against model endpoints follow a different structure depending on the task type. This section explores the output data formats for multi-class and multi-label image classification, object detection, and instance segmentation tasks.
The following schemas are applicable when the input request contains one image.
Image classification (binary/multi-class)
The endpoint for image classification returns all the labels in the dataset and their probability scores for the input image, in the following format. visualizations and attributions are related to explainability; when the request is only for scoring, these keys won't be included in the output. For more information on the explainability input and output schema for image classification, see [the explainability section for image classification].
[
    {
        "probs": [
            2.098e-06,
            4.783e-08,
            0.999,
            8.637e-06
        ],
        "labels": [
            "can",
            "carton",
            "milk_bottle",
            "water_bottle"
        ]
    }
]
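As a usage note, here is a minimal sketch of reading the top prediction out of this response, assuming resp is the JSON string returned by ml_client.online_endpoints.invoke in the snippet above:

import json

# Parse the endpoint response; predictions has one entry per input image
predictions = json.loads(resp)
result = predictions[0]

# Pick the label with the highest probability score
best_label, best_prob = max(zip(result["labels"], result["probs"]), key=lambda pair: pair[1])
print(f"Predicted class: {best_label} ({best_prob:.3f})")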
Multi-label image classification
For multi-label image classification, the model endpoint returns labels and their probabilities. visualizations and attributions are related to explainability; when the request is only for scoring, these keys won't be included in the output. For more information on the explainability input and output schema for multi-label classification, see [the explainability section for multi-label image classification].
[
    {
        "probs": [
            0.997,
            0.960,
            0.982,
            0.025
        ],
        "labels": [
            "can",
            "carton",
            "milk_bottle",
            "water_bottle"
        ]
    }
]
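Because several labels can apply to one image, a common post-processing step is to keep every label whose probability clears a threshold. A minimal sketch, using 0.5 as an illustrative threshold and assuming resp is the invoke response from above:

import json

predictions = json.loads(resp)
result = predictions[0]

# Keep all labels whose probability is at or above the threshold
threshold = 0.5
selected = [
    (label, prob)
    for label, prob in zip(result["labels"], result["probs"])
    if prob >= threshold
]
print(selected)  # e.g. [("can", 0.997), ("carton", 0.960), ("milk_bottle", 0.982)]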
Object detection
The object detection model returns multiple boxes with their scaled top-left and bottom-right coordinates, along with the box label and confidence score.
[
    {
        "boxes": [
            {
                "box": {
                    "topX": 0.224,
                    "topY": 0.285,
                    "bottomX": 0.399,
                    "bottomY": 0.620
                },
                "label": "milk_bottle",
                "score": 0.937
            },
            {
                "box": {
                    "topX": 0.664,
                    "topY": 0.484,
                    "bottomX": 0.959,
                    "bottomY": 0.812
                },
                "label": "can",
                "score": 0.891
            },
            {
                "box": {
                    "topX": 0.423,
                    "topY": 0.253,
                    "bottomX": 0.632,
                    "bottomY": 0.725
                },
                "label": "water_bottle",
                "score": 0.876
            }
        ]
    }
]
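Since the box coordinates are returned as fractions of the image size, they usually have to be mapped back to pixels before drawing or cropping. A minimal sketch, assuming resp is the invoke response and the original image dimensions are known:

import json

def box_to_pixels(box, image_width, image_height):
    # topX/bottomX are fractions of the width; topY/bottomY of the height
    return (
        box["topX"] * image_width,
        box["topY"] * image_height,
        box["bottomX"] * image_width,
        box["bottomY"] * image_height,
    )

detections = json.loads(resp)[0]["boxes"]
for detection in detections:
    x1, y1, x2, y2 = box_to_pixels(detection["box"], 499, 666)  # hypothetical dimensions
    print(detection["label"], detection["score"], (x1, y1, x2, y2))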
Instance segmentation
In instance segmentation, the output consists of multiple boxes with their scaled top-left and bottom-right coordinates, labels, confidence scores, and polygons (not masks). Here, the polygon values are in the same format that we discussed in [the schema section].
[
    {
        "boxes": [
            {
                "box": {
                    "topX": 0.679,
                    "topY": 0.491,
                    "bottomX": 0.926,
                    "bottomY": 0.810
                },
                "label": "can",
                "score": 0.992,
                "polygon": [
                    [
                        0.82, 0.811, 0.771, 0.810, 0.758, 0.805, 0.741, 0.797, 0.735, 0.791, 0.718, 0.785, 0.715, 0.778, 0.706, 0.775, 0.696, 0.758, 0.695, 0.717, 0.698, 0.567, 0.705, 0.552, 0.706, 0.540, 0.725, 0.520, 0.735, 0.505, 0.745, 0.502, 0.755, 0.493
                    ]
                ]
            },
            {
                "box": {
                    "topX": 0.220,
                    "topY": 0.298,
                    "bottomX": 0.397,
                    "bottomY": 0.601
                },
                "label": "milk_bottle",
                "score": 0.989,
                "polygon": [
                    [
                        0.365, 0.602, 0.273, 0.602, 0.26, 0.595, 0.263, 0.588, 0.251, 0.546, 0.248, 0.501, 0.25, 0.485, 0.246, 0.478, 0.245, 0.463, 0.233, 0.442, 0.231, 0.43, 0.226, 0.423, 0.226, 0.408, 0.234, 0.385, 0.241, 0.371, 0.238, 0.345, 0.234, 0.335, 0.233, 0.325, 0.24, 0.305, 0.586, 0.38, 0.592, 0.375, 0.598, 0.365
                    ]
                ]
            },
            {
                "box": {
                    "topX": 0.433,
                    "topY": 0.280,
                    "bottomX": 0.621,
                    "bottomY": 0.679
                },
                "label": "water_bottle",
                "score": 0.988,
                "polygon": [
                    [
                        0.576, 0.680, 0.501, 0.680, 0.475, 0.675, 0.460, 0.625, 0.445, 0.630, 0.443, 0.572, 0.440, 0.560, 0.435, 0.515, 0.431, 0.501, 0.431, 0.433, 0.433, 0.426, 0.445, 0.417, 0.456, 0.407, 0.465, 0.381, 0.468, 0.327, 0.471, 0.318
                    ]
                ]
            }
        ]
    }
]
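The returned polygons can be converted back to pixel (x, y) pairs in the same way as the boxes. A minimal sketch, again assuming resp is the invoke response and hypothetical image dimensions:

import json

# Convert a flattened, normalized polygon back to pixel (x, y) pairs
def polygon_to_pixels(flat_polygon, image_width, image_height):
    coords = iter(flat_polygon)
    return [(x * image_width, y * image_height) for x, y in zip(coords, coords)]

segments = json.loads(resp)[0]["boxes"]
for segment in segments:
    for ring in segment["polygon"]:
        print(segment["label"], polygon_to_pixels(ring, 499, 666))  # hypothetical dimensions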
Data format for online scoring and explainability (XAI)
This section explains the input data format required to make predictions and to generate explanations for the predicted class when using the deployed model. No separate deployment is needed to generate explanations; the same endpoint used for online scoring can be used. We only need to pass some extra explainability-related parameters in the input schema to get visualizations of explanations and/or attribution score matrices (pixel-level explanations).
Supported explainability methods:
- XRAI (xrai)
- Integrated Gradients (integrated_gradients)
- Guided GradCAM (guided_gradcam)
- Guided backpropagation (guided_backprop)
Input format (XAI)
The following input format is supported to generate predictions and explanations for any classification task by using a task-specific model endpoint. After deploying the model, we can use the following schema to get predictions and explanations.
{
    "input_data": {
        "columns": ["image"],
        "data": [json.dumps({"image_base64": "image_in_base64_string_format",
                             "model_explainability": True,
                             "xai_parameters": {}
                            })
        ]
    }
}
In addition to the image, two extra parameters (model_explainability and xai_parameters) are required in the input schema to generate explanations.
The explainability settings supported by the xai_parameters dictionary are shown in the following code. Each input image in request_json (defined in the code below) is a base64-encoded string appended to the list request_json["input_data"]["data"]:
import base64
import json

# Get the details for online endpoint
endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)

sample_image = "./test_image.jpg"

# Define explainability (XAI) parameters
model_explainability = True
xai_parameters = {"xai_algorithm": "xrai",
                  "visualizations": True,
                  "attributions": False}

def read_image(image_path):
    with open(image_path, "rb") as f:
        return f.read()

# Create request json
request_json = {
    "input_data": {
        "columns": ["image"],
        "data": [json.dumps({"image_base64": base64.encodebytes(read_image(sample_image)).decode("utf-8"),
                             "model_explainability": model_explainability,
                             "xai_parameters": xai_parameters})],
    }
}

request_file_name = "sample_request_data.json"
with open(request_file_name, "w") as request_file:
    json.dump(request_json, request_file)

resp = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name=deployment.name,
    request_file=request_file_name,
)
predictions = json.loads(resp)
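When visualizations is set to True, the returned visualizations value is a base64-encoded image (the strings in the output examples below start with the base64 PNG signature iVBORw0KGgo). A minimal sketch of saving it to disk, assuming predictions comes from json.loads(resp) above:

import base64

# 'visualizations' holds the explanation overlay as a base64-encoded image
viz_b64 = predictions[0]["visualizations"]
with open("explanation.png", "wb") as image_file:
    image_file.write(base64.b64decode(viz_b64))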
Output format (XAI)
Predictions made against model endpoints follow a different schema depending on the task type. This section describes the output data formats for multi-class and multi-label image classification tasks.
The following schemas are defined for the case of a single input image.
Image classification (binary/multi-class)
The output schema is [the same as described above], except that it also includes the visualizations and attributions key values if those keys were set to True in the request. That is, if model_explainability, visualizations, and attributions are set to True in the input request, the output will have visualizations and attributions, generated for the class that has the highest probability score.
[
    {
        "probs": [
            0.006,
            9.345e-05,
            0.992,
            0.003
        ],
        "labels": [
            "can",
            "carton",
            "milk_bottle",
            "water_bottle"
        ],
        "visualizations": "iVBORw0KGgoAAAAN.....",
        "attributions": [[[-4.2969e-04, -1.3090e-03, 7.7791e-04, ..., 2.6677e-04,
                           -5.5195e-03, 1.7989e-03],
                          .
                          .
                          .
                          [-5.8236e-03, -7.9108e-04, -2.6963e-03, ..., 2.6517e-03,
                           1.2546e-03, 6.6507e-04]]]
    }
]
Multi-label image classification
The only difference in the output schema of multi-label classification compared to multi-class classification is that there can be multiple classes in each image for which explanations are generated. So visualizations is a list of base64 image strings, and attributions is a list of attribution scores for each selected class, where classes are selected based on confidence_score_threshold_multilabel (the default is 0.5). If model_explainability, visualizations, and attributions are set to True in the input request, the output will have visualizations and attributions, generated for all classes whose probability score is greater than or equal to confidence_score_threshold_multilabel.
Warning
When generating explanations on an online endpoint, make sure to select only a few classes based on confidence score, to avoid timeout issues on the endpoint, or use an endpoint with a GPU instance type. To generate explanations for a large number of classes in multi-label classification, refer to the batch scoring notebook (SDK v1).
[
    {
        "probs": [
            0.994,
            0.994,
            0.843,
            0.166
        ],
        "labels": [
            "can",
            "carton",
            "milk_bottle",
            "water_bottle"
        ],
        "visualizations": ["iVBORw0KGgoAAAAN.....", "iVBORw0KGgoAAAAN......", .....],
        "attributions": [
            [[[-4.2969e-04, -1.3090e-03, 7.7791e-04, ..., 2.6677e-04,
               -5.5195e-03, 1.7989e-03],
              .
              .
              .
              [-5.8236e-03, -7.9108e-04, -2.6963e-03, ..., 2.6517e-03,
               1.2546e-03, 6.6507e-04]]],
            .
            .
            .
        ]
    }
]
Object detection
Warning
XAI isn't supported, so only scores are returned. For a scoring example, refer to [the online scoring section].
Instance segmentation
Warning
XAI isn't supported, so only scores are returned. For a scoring example, refer to [the online scoring section].